Test Report: Docker_Linux_crio 22081

502ebf1e50e408071a7e5daf27f82abd53674654:2025-12-09:42698

Failed tests (27/415)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:910: skipping: crio not supported
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable volcano --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable volcano --alsologtostderr -v=1: exit status 11 (239.742984ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 01:57:18.175470   24096 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:18.175609   24096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:18.175619   24096 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:18.175623   24096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:18.175824   24096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:18.176085   24096 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:18.176436   24096 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:18.176454   24096 addons.go:622] checking whether the cluster is paused
	I1209 01:57:18.176532   24096 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:18.176543   24096 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:18.176912   24096 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:18.194514   24096 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:18.194568   24096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:18.212018   24096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:18.301911   24096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:18.301985   24096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:18.329769   24096 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:18.329788   24096 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:18.329791   24096 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:18.329795   24096 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:18.329797   24096 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:18.329802   24096 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:18.329805   24096 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:18.329808   24096 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:18.329811   24096 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:18.329825   24096 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:18.329833   24096 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:18.329837   24096 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:18.329845   24096 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:18.329850   24096 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:18.329858   24096 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:18.329863   24096 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:18.329868   24096 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:18.329873   24096 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:18.329876   24096 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:18.329878   24096 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:18.329884   24096 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:18.329886   24096 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:18.329889   24096 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:18.329906   24096 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:18.329914   24096 cri.go:89] found id: ""
	I1209 01:57:18.329958   24096 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:18.344071   24096 out.go:203] 
	W1209 01:57:18.345304   24096 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:18.345324   24096 out.go:285] * 
	* 
	W1209 01:57:18.348362   24096 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:18.349533   24096 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
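
Analysis: this failure, and the identical exit-status-11 failures of TestAddons/parallel/Registry and TestAddons/parallel/RegistryCreds below, is in the pre-disable paused-state check rather than in the addon itself. The trace shows addons.go:622 checking whether the cluster is paused, the crictl listing of kube-system containers succeeding, and then "sudo runc list -f json" failing because /run/runc does not exist on this crio node, which aborts the disable with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch against this profile (assumes the cluster is still running; "minikube ssh --" simply forwards the command to the node):

	$ out/minikube-linux-amd64 -p addons-598284 ssh -- sudo runc list -f json
	time="..." level=error msg="open /run/runc: no such file or directory"
	$ out/minikube-linux-amd64 -p addons-598284 ssh -- sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system
	# expected to print container IDs, as in the trace above: crio can enumerate the
	# containers even though the raw runc state-directory query cannot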

TestAddons/parallel/Registry (13.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:442: registry stabilized in 3.164096ms
addons_test.go:444: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Running
addons_test.go:444: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002191955s
addons_test.go:447: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Running
addons_test.go:447: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00324085s
addons_test.go:452: (dbg) Run:  kubectl --context addons-598284 delete po -l run=registry-test --now
addons_test.go:457: (dbg) Run:  kubectl --context addons-598284 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:457: (dbg) Done: kubectl --context addons-598284 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.20903986s)
addons_test.go:471: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 ip
2025/12/09 01:57:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable registry --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable registry --alsologtostderr -v=1: exit status 11 (279.82848ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 01:57:39.660205   25703 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:39.660549   25703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:39.660561   25703 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:39.660567   25703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:39.660821   25703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:39.661160   25703 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:39.661617   25703 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:39.661652   25703 addons.go:622] checking whether the cluster is paused
	I1209 01:57:39.661792   25703 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:39.661812   25703 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:39.662214   25703 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:39.684439   25703 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:39.684494   25703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:39.706539   25703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:39.806059   25703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:39.806146   25703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:39.842194   25703 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:39.842254   25703 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:39.842266   25703 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:39.842272   25703 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:39.842276   25703 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:39.842281   25703 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:39.842286   25703 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:39.842290   25703 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:39.842294   25703 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:39.842302   25703 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:39.842306   25703 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:39.842310   25703 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:39.842314   25703 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:39.842318   25703 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:39.842322   25703 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:39.842333   25703 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:39.842338   25703 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:39.842345   25703 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:39.842349   25703 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:39.842354   25703 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:39.842364   25703 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:39.842368   25703 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:39.842372   25703 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:39.842377   25703 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:39.842381   25703 cri.go:89] found id: ""
	I1209 01:57:39.842428   25703 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:39.859751   25703 out.go:203] 
	W1209 01:57:39.860954   25703 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:39.860983   25703 out.go:285] * 
	* 
	W1209 01:57:39.864345   25703 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:39.869001   25703 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.72s)
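
Note: the registry workload itself passed every check in this run (both pods healthy, the busybox wget probe succeeded, and the registry answered on 192.168.49.2:5000); only the final "addons disable registry" aborted, on the same runc paused-state probe analyzed under TestAddons/serial/Volcano above.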

TestAddons/parallel/RegistryCreds (0.38s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:383: registry-creds stabilized in 3.570966ms
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-598284
addons_test.go:392: (dbg) Run:  kubectl --context addons-598284 -n kube-system get secret -o yaml
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (231.080942ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 01:57:38.840086   25362 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:38.840400   25362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:38.840410   25362 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:38.840415   25362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:38.840600   25362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:38.840864   25362 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:38.841165   25362 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:38.841184   25362 addons.go:622] checking whether the cluster is paused
	I1209 01:57:38.841268   25362 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:38.841279   25362 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:38.841645   25362 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:38.858860   25362 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:38.858905   25362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:38.875534   25362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:38.966949   25362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:38.967033   25362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:38.994412   25362 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:38.994439   25362 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:38.994447   25362 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:38.994451   25362 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:38.994455   25362 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:38.994461   25362 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:38.994466   25362 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:38.994471   25362 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:38.994478   25362 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:38.994488   25362 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:38.994499   25362 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:38.994503   25362 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:38.994508   25362 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:38.994513   25362 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:38.994518   25362 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:38.994525   25362 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:38.994533   25362 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:38.994537   25362 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:38.994540   25362 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:38.994543   25362 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:38.994546   25362 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:38.994548   25362 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:38.994551   25362 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:38.994553   25362 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:38.994556   25362 cri.go:89] found id: ""
	I1209 01:57:38.994595   25362 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:39.009751   25362 out.go:203] 
	W1209 01:57:39.010962   25362 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:39.010982   25362 out.go:285] * 
	* 
	W1209 01:57:39.013952   25362 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:39.015269   25362 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.38s)
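
Note: as in the two failures above, the registry-creds configure step and the kube-system secret inspection both completed; what failed is the shared runc paused-state probe in the disable path, not the addon configuration.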

TestAddons/parallel/Ingress (145.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run:  kubectl --context addons-598284 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run:  kubectl --context addons-598284 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run:  kubectl --context addons-598284 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c3f9ac70-0555-42e6-961e-ea2ec68107c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c3f9ac70-0555-42e6-961e-ea2ec68107c3] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003583919s
I1209 01:57:47.093184   14552 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:324: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.480001985s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:340: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
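
Analysis: unlike the disable failures, this is a connectivity timeout. ssh propagates the remote command's exit status, and 28 is curl's exit code for an operation timeout, so the in-node request to the ingress controller on 127.0.0.1:80 hung for the full 2m13s instead of being refused. A hedged manual probe for triage (the -v verbosity and the --max-time bound are debugging additions, not part of the test):

	$ out/minikube-linux-amd64 -p addons-598284 ssh -- curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'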
addons_test.go:348: (dbg) Run:  kubectl --context addons-598284 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 ip
addons_test.go:359: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-598284
helpers_test.go:243: (dbg) docker inspect addons-598284:

-- stdout --
	[
	    {
	        "Id": "af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a",
	        "Created": "2025-12-09T01:56:07.77688206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T01:56:07.805870805Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/hosts",
	        "LogPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a-json.log",
	        "Name": "/addons-598284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-598284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-598284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a",
	                "LowerDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/merged",
	                "UpperDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/diff",
	                "WorkDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-598284",
	                "Source": "/var/lib/docker/volumes/addons-598284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-598284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-598284",
	                "name.minikube.sigs.k8s.io": "addons-598284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a930a2cd7ef80013d6dbd4ab7fd70855f90b1b360390174f9c9db4402805326",
	            "SandboxKey": "/var/run/docker/netns/8a930a2cd7ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-598284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0182f7928830a74743f077e940f476da5a02ae5531a91dbf01f6402ec74d0736",
	                    "EndpointID": "d372fdf1933e1b672f77161db45b875da4b8b9d12892ebbf2c4136d97e436db7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "92:24:09:61:65:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-598284",
	                        "af613a4a6c36"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-598284 -n addons-598284
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-598284 logs -n 25: (1.019314879s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-013418 --alsologtostderr --binary-mirror http://127.0.0.1:35749 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-013418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ -p binary-mirror-013418                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-013418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ addons  │ enable dashboard -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ start   │ -p addons-598284 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                           │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ip      │ addons-598284 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ssh     │ addons-598284 ssh cat /opt/local-path-provisioner/pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ssh     │ addons-598284 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-598284 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │                     │
	│ addons  │ addons-598284 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │                     │
	│ ip      │ addons-598284 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-598284        │ jenkins │ v1.37.0 │ 09 Dec 25 02:00 UTC │ 09 Dec 25 02:00 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
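	
Each entry below follows the klog header layout documented on the line above. As a minimal illustrative sketch in Go (minikube itself is a Go binary, but this is not minikube's own parser), the header fields can be split out like so:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// header format stated in the log preamble above.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I1209 01:55:45.179972   16330 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		// I=info, month 12, day 09, then time, pid, source file:line, message.
		fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}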
	I1209 01:55:45.179972   16330 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:45.180095   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:45.180105   16330 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:45.180111   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:45.180370   16330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:55:45.180830   16330 out.go:368] Setting JSON to false
	I1209 01:55:45.181539   16330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2294,"bootTime":1765243051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:45.181604   16330 start.go:143] virtualization: kvm guest
	I1209 01:55:45.183258   16330 out.go:179] * [addons-598284] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:45.184289   16330 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 01:55:45.184319   16330 notify.go:221] Checking for updates...
	I1209 01:55:45.186642   16330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:45.187856   16330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:55:45.188826   16330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 01:55:45.189960   16330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 01:55:45.190955   16330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 01:55:45.192154   16330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:45.212933   16330 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 01:55:45.213048   16330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:45.262867   16330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-09 01:55:45.254148119 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:45.262987   16330 docker.go:319] overlay module found
	I1209 01:55:45.264532   16330 out.go:179] * Using the docker driver based on user configuration
	I1209 01:55:45.265685   16330 start.go:309] selected driver: docker
	I1209 01:55:45.265701   16330 start.go:927] validating driver "docker" against <nil>
	I1209 01:55:45.265713   16330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 01:55:45.266238   16330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:45.321321   16330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-09 01:55:45.31189074 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:45.321463   16330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:45.321716   16330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:55:45.323153   16330 out.go:179] * Using Docker driver with root privileges
	I1209 01:55:45.324212   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:55:45.324262   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:55:45.324271   16330 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:45.324320   16330 start.go:353] cluster config:
	{Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:45.325537   16330 out.go:179] * Starting "addons-598284" primary control-plane node in "addons-598284" cluster
	I1209 01:55:45.326541   16330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 01:55:45.327557   16330 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 01:55:45.328492   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:55:45.328527   16330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 01:55:45.328536   16330 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:45.328583   16330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 01:55:45.328642   16330 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 01:55:45.328658   16330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 01:55:45.328975   16330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json ...
	I1209 01:55:45.329005   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json: {Name:mk6a13e76bffff1fe136e5fbf8142f787a177248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:55:45.343761   16330 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 01:55:45.343860   16330 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 01:55:45.343874   16330 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 01:55:45.343878   16330 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 01:55:45.343885   16330 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 01:55:45.343892   16330 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c from local cache
	I1209 01:55:57.421936   16330 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c from cached tarball
	I1209 01:55:57.421975   16330 cache.go:243] Successfully downloaded all kic artifacts
	I1209 01:55:57.422027   16330 start.go:360] acquireMachinesLock for addons-598284: {Name:mk44b5bd868e7b7f8b62000352ad95d542ea5dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:57.422128   16330 start.go:364] duration metric: took 78.237µs to acquireMachinesLock for "addons-598284"
	I1209 01:55:57.422155   16330 start.go:93] Provisioning new machine with config: &{Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:55:57.422254   16330 start.go:125] createHost starting for "" (driver="docker")
	I1209 01:55:57.423773   16330 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1209 01:55:57.423984   16330 start.go:159] libmachine.API.Create for "addons-598284" (driver="docker")
	I1209 01:55:57.424019   16330 client.go:173] LocalClient.Create starting
	I1209 01:55:57.424106   16330 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 01:55:57.461025   16330 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 01:55:57.520927   16330 cli_runner.go:164] Run: docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 01:55:57.537548   16330 cli_runner.go:211] docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 01:55:57.537609   16330 network_create.go:284] running [docker network inspect addons-598284] to gather additional debugging logs...
	I1209 01:55:57.537631   16330 cli_runner.go:164] Run: docker network inspect addons-598284
	W1209 01:55:57.552763   16330 cli_runner.go:211] docker network inspect addons-598284 returned with exit code 1
	I1209 01:55:57.552789   16330 network_create.go:287] error running [docker network inspect addons-598284]: docker network inspect addons-598284: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-598284 not found
	I1209 01:55:57.552814   16330 network_create.go:289] output of [docker network inspect addons-598284]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-598284 not found
	
	** /stderr **
	I1209 01:55:57.552893   16330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 01:55:57.569604   16330 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e4ab60}
	I1209 01:55:57.569651   16330 network_create.go:124] attempt to create docker network addons-598284 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 01:55:57.569699   16330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-598284 addons-598284
	I1209 01:55:57.612792   16330 network_create.go:108] docker network addons-598284 192.168.49.0/24 created
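	
The network struct chosen at network.go:206 above pins the gateway to .1, the client range to .2–.254, and the broadcast to .255 of the selected /24. A self-contained Go sketch that derives those same addresses from the CIDR (illustrative only; minikube's actual helper in its network package is not reproduced here):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Parse the subnet minikube picked and derive the addresses it logged.
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		pick := func(last byte) net.IP {
			ip := make(net.IP, len(base))
			copy(ip, base)
			ip[3] = last
			return ip
		}
		fmt.Println("gateway:  ", pick(1))   // 192.168.49.1
		fmt.Println("clientMin:", pick(2))   // 192.168.49.2
		fmt.Println("clientMax:", pick(254)) // 192.168.49.254
		fmt.Println("broadcast:", pick(255)) // 192.168.49.255
	}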
	I1209 01:55:57.612834   16330 kic.go:121] calculated static IP "192.168.49.2" for the "addons-598284" container
	I1209 01:55:57.612899   16330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 01:55:57.628020   16330 cli_runner.go:164] Run: docker volume create addons-598284 --label name.minikube.sigs.k8s.io=addons-598284 --label created_by.minikube.sigs.k8s.io=true
	I1209 01:55:57.643552   16330 oci.go:103] Successfully created a docker volume addons-598284
	I1209 01:55:57.643651   16330 cli_runner.go:164] Run: docker run --rm --name addons-598284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --entrypoint /usr/bin/test -v addons-598284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 01:56:03.951245   16330 cli_runner.go:217] Completed: docker run --rm --name addons-598284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --entrypoint /usr/bin/test -v addons-598284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib: (6.307538532s)
	I1209 01:56:03.951282   16330 oci.go:107] Successfully prepared a docker volume addons-598284
	I1209 01:56:03.951339   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:56:03.951351   16330 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 01:56:03.951400   16330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-598284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 01:56:07.709675   16330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-598284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.758206606s)
	I1209 01:56:07.709704   16330 kic.go:203] duration metric: took 3.758349437s to extract preloaded images to volume ...
	W1209 01:56:07.709785   16330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 01:56:07.709818   16330 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 01:56:07.709869   16330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 01:56:07.762156   16330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-598284 --name addons-598284 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-598284 --network addons-598284 --ip 192.168.49.2 --volume addons-598284:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 01:56:08.036330   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Running}}
	I1209 01:56:08.053914   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.072812   16330 cli_runner.go:164] Run: docker exec addons-598284 stat /var/lib/dpkg/alternatives/iptables
	I1209 01:56:08.114590   16330 oci.go:144] the created container "addons-598284" has a running status.
	I1209 01:56:08.114615   16330 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa...
	I1209 01:56:08.226334   16330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 01:56:08.253741   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.274089   16330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 01:56:08.274112   16330 kic_runner.go:114] Args: [docker exec --privileged addons-598284 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 01:56:08.323788   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.345424   16330 machine.go:94] provisionDockerMachine start ...
	I1209 01:56:08.345513   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.369045   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.369361   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.369379   16330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 01:56:08.500248   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-598284
	
	I1209 01:56:08.500281   16330 ubuntu.go:182] provisioning hostname "addons-598284"
	I1209 01:56:08.500350   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.518116   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.518338   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.518355   16330 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-598284 && echo "addons-598284" | sudo tee /etc/hostname
	I1209 01:56:08.652739   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-598284
	
	I1209 01:56:08.652820   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.670004   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.670301   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.670327   16330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-598284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-598284/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-598284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 01:56:08.792355   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 01:56:08.792381   16330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 01:56:08.792412   16330 ubuntu.go:190] setting up certificates
	I1209 01:56:08.792428   16330 provision.go:84] configureAuth start
	I1209 01:56:08.792470   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:08.808571   16330 provision.go:143] copyHostCerts
	I1209 01:56:08.808656   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 01:56:08.808803   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 01:56:08.808905   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 01:56:08.809007   16330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.addons-598284 san=[127.0.0.1 192.168.49.2 addons-598284 localhost minikube]
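	
The server certificate generated here is signed with the minikubeCA key and carries the SAN set listed in the log line above. For orientation, a self-signed approximation using Go's crypto/x509 with the same SANs and the 26280h expiry from the cluster config (a sketch only; minikube signs with its CA rather than self-signing):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-598284"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the san=[...] list in the log line above.
			DNSNames:    []string{"addons-598284", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}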
	I1209 01:56:08.845401   16330 provision.go:177] copyRemoteCerts
	I1209 01:56:08.845455   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 01:56:08.845486   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.861265   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:08.951701   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 01:56:08.970096   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 01:56:08.985439   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 01:56:09.000721   16330 provision.go:87] duration metric: took 208.283987ms to configureAuth
	I1209 01:56:09.000746   16330 ubuntu.go:206] setting minikube options for container-runtime
	I1209 01:56:09.000898   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:09.000988   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.017366   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:09.017628   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:09.017663   16330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 01:56:09.272420   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 01:56:09.272454   16330 machine.go:97] duration metric: took 927.008723ms to provisionDockerMachine
	I1209 01:56:09.272467   16330 client.go:176] duration metric: took 11.848439258s to LocalClient.Create
	I1209 01:56:09.272493   16330 start.go:167] duration metric: took 11.848507122s to libmachine.API.Create "addons-598284"
	I1209 01:56:09.272504   16330 start.go:293] postStartSetup for "addons-598284" (driver="docker")
	I1209 01:56:09.272517   16330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 01:56:09.272596   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 01:56:09.272658   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.289745   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.381544   16330 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 01:56:09.384630   16330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 01:56:09.384677   16330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 01:56:09.384690   16330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 01:56:09.384745   16330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 01:56:09.384769   16330 start.go:296] duration metric: took 112.258367ms for postStartSetup
	I1209 01:56:09.385028   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:09.401292   16330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json ...
	I1209 01:56:09.401529   16330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 01:56:09.401569   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.417796   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.504872   16330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 01:56:09.508992   16330 start.go:128] duration metric: took 12.086723888s to createHost
	I1209 01:56:09.509017   16330 start.go:83] releasing machines lock for "addons-598284", held for 12.086876985s
	I1209 01:56:09.509082   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:09.525776   16330 ssh_runner.go:195] Run: cat /version.json
	I1209 01:56:09.525816   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.525855   16330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 01:56:09.525917   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.543886   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.544684   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.702523   16330 ssh_runner.go:195] Run: systemctl --version
	I1209 01:56:09.708484   16330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 01:56:09.740161   16330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 01:56:09.744220   16330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 01:56:09.744287   16330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 01:56:09.766968   16330 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 01:56:09.766987   16330 start.go:496] detecting cgroup driver to use...
	I1209 01:56:09.767012   16330 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 01:56:09.767045   16330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 01:56:09.781250   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 01:56:09.791850   16330 docker.go:218] disabling cri-docker service (if available) ...
	I1209 01:56:09.791884   16330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 01:56:09.805918   16330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 01:56:09.821811   16330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 01:56:09.897950   16330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 01:56:09.983124   16330 docker.go:234] disabling docker service ...
	I1209 01:56:09.983181   16330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 01:56:09.999259   16330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 01:56:10.010167   16330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 01:56:10.088160   16330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 01:56:10.159913   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 01:56:10.170717   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 01:56:10.183246   16330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 01:56:10.183306   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.192052   16330 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 01:56:10.192105   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.199757   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.207279   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.214949   16330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 01:56:10.221958   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.229427   16330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.241263   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
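	
Taken together, the sed edits above should leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following keys. The TOML section headers are the standard CRI-O ones and are an assumption here; the log shows only the key rewrites, not the surrounding file:

	# assumed layout; the actual drop-in may carry additional defaults
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]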
	I1209 01:56:10.248859   16330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 01:56:10.255215   16330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 01:56:10.255247   16330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 01:56:10.265623   16330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 01:56:10.272051   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:10.349683   16330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 01:56:10.479364   16330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 01:56:10.479442   16330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 01:56:10.483018   16330 start.go:564] Will wait 60s for crictl version
	I1209 01:56:10.483071   16330 ssh_runner.go:195] Run: which crictl
	I1209 01:56:10.486282   16330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 01:56:10.508801   16330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 01:56:10.508901   16330 ssh_runner.go:195] Run: crio --version
	I1209 01:56:10.533786   16330 ssh_runner.go:195] Run: crio --version
	I1209 01:56:10.561106   16330 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 01:56:10.562057   16330 cli_runner.go:164] Run: docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 01:56:10.578373   16330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 01:56:10.582076   16330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:10.591429   16330 kubeadm.go:884] updating cluster {Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 01:56:10.591538   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:56:10.591579   16330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.619694   16330 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 01:56:10.619711   16330 crio.go:433] Images already preloaded, skipping extraction
	I1209 01:56:10.619756   16330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.642594   16330 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 01:56:10.642612   16330 cache_images.go:86] Images are preloaded, skipping loading
	I1209 01:56:10.642620   16330 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1209 01:56:10.642720   16330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-598284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 01:56:10.642777   16330 ssh_runner.go:195] Run: crio config
	I1209 01:56:10.684046   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:56:10.684072   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:56:10.684091   16330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 01:56:10.684113   16330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-598284 NodeName:addons-598284 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 01:56:10.684252   16330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-598284"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 01:56:10.684319   16330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 01:56:10.691781   16330 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 01:56:10.691833   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 01:56:10.698796   16330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 01:56:10.710040   16330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 01:56:10.723372   16330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1209 01:56:10.734476   16330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 01:56:10.737709   16330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:10.746516   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:10.823268   16330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:10.843535   16330 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284 for IP: 192.168.49.2
	I1209 01:56:10.843556   16330 certs.go:195] generating shared ca certs ...
	I1209 01:56:10.843575   16330 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.843715   16330 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 01:56:10.926367   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt ...
	I1209 01:56:10.926391   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt: {Name:mk790d55dd352f1c7ef088b4fa3cda215d478a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.926557   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key ...
	I1209 01:56:10.926572   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key: {Name:mk512156b260a50233807f4323f62f483a367ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.926692   16330 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 01:56:11.011334   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt ...
	I1209 01:56:11.011358   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt: {Name:mka57609f918144b8e592527a59a5a66348a52f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.011518   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key ...
	I1209 01:56:11.011534   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key: {Name:mked7769cc0b81a98ffde923610023dc6ee34491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.011646   16330 certs.go:257] generating profile certs ...
	I1209 01:56:11.011707   16330 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key
	I1209 01:56:11.011721   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt with IP's: []
	I1209 01:56:11.136936   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt ...
	I1209 01:56:11.136957   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: {Name:mka0584a25dc5e0099dc0467c4404ba608f812b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.137111   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key ...
	I1209 01:56:11.137125   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key: {Name:mk521c5b80fdbaf18aafa3d3a79f2225fc6dbc13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.137223   16330 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407
	I1209 01:56:11.137243   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 01:56:11.387957   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 ...
	I1209 01:56:11.387986   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407: {Name:mk5c8a6fab4fbb5af4410c999df04ccddbd6ca04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.388167   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407 ...
	I1209 01:56:11.388184   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407: {Name:mk177c432696896647441f42287fc80bec97a241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.388288   16330 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt
	I1209 01:56:11.388367   16330 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key
	I1209 01:56:11.388415   16330 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key
	I1209 01:56:11.388442   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt with IP's: []
	I1209 01:56:11.518312   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt ...
	I1209 01:56:11.518339   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt: {Name:mke190163091ae0514c0215dcaf842dfb9f53535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.518507   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key ...
	I1209 01:56:11.518524   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key: {Name:mk7ea6183762eae8f6765d6091c424280ff7f088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.518735   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 01:56:11.518773   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 01:56:11.518799   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 01:56:11.518830   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
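
The sequence above builds minikube's certificate hierarchy: two self-signed CAs (minikubeCA and proxyClientCA), then per-profile leaf certs signed by them: a client cert, an apiserver cert whose SANs carry the service IP, loopback, and node IP, and an aggregator proxy-client cert. A minimal sketch of that CA-plus-signed-leaf flow using Go's standard crypto/x509 follows; key sizes, lifetimes, and subjects are assumptions for illustration, not minikube's actual certs.go logic, and error handling is elided.

    // Illustrative sketch only: a self-signed CA plus a leaf cert signed by
    // it, mirroring the "generating ... ca cert" and "generating signed
    // profile cert" steps logged above. Errors are ignored for brevity.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, analogous to minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf signed by the CA, analogous to the apiserver cert with the
        // SAN IPs shown in the log line at 01:56:11.137243.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
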
	I1209 01:56:11.519402   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 01:56:11.536364   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 01:56:11.552192   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 01:56:11.567345   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 01:56:11.582363   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 01:56:11.597642   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 01:56:11.612873   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 01:56:11.628160   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 01:56:11.643241   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 01:56:11.660074   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 01:56:11.670954   16330 ssh_runner.go:195] Run: openssl version
	I1209 01:56:11.676929   16330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.683742   16330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 01:56:11.692939   16330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.696487   16330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.696536   16330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.730090   16330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 01:56:11.736678   16330 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
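
The openssl/ln pair above is the standard OpenSSL trust-store trick: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 in this run), and /etc/ssl/certs/<hash>.0 is the symlink name that hash-based CA lookups scan for. A sketch of the same step, shelling out to the same openssl binary the log runs; the CA path is taken from the log, and root privileges are assumed for writing under /etc/ssl/certs.

    // Sketch: compute the subject hash and create the <hash>.0 symlink,
    // as the `ln -fs` at 01:56:11.736678 does. Requires root.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const ca = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this run
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // -f semantics: drop any stale link before re-creating it.
        os.Remove(link)
        if err := os.Symlink(ca, link); err != nil {
            panic(err)
        }
    }
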
	I1209 01:56:11.743309   16330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 01:56:11.746419   16330 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
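
The failed stat is expected here: the presence of apiserver-kubelet-client.crt is what distinguishes a restart of an existing cluster from a first start that needs a full `kubeadm init`. A sketch of that probe, with the path taken from the log:

    // Sketch of the first-start probe above: ENOENT on this cert means
    // kubeadm has never run here, so the init path is chosen.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        switch {
        case err == nil:
            fmt.Println("cert exists: existing cluster, take the restart path")
        case errors.Is(err, fs.ErrNotExist):
            fmt.Println("cert missing: likely first start, run kubeadm init")
        default:
            fmt.Println("unexpected stat error:", err)
        }
    }
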
	I1209 01:56:11.746466   16330 kubeadm.go:401] StartCluster: {Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:56:11.746532   16330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:56:11.746570   16330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:56:11.770731   16330 cri.go:89] found id: ""
	I1209 01:56:11.770802   16330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 01:56:11.777684   16330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 01:56:11.784693   16330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 01:56:11.784755   16330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 01:56:11.791450   16330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 01:56:11.791470   16330 kubeadm.go:158] found existing configuration files:
	
	I1209 01:56:11.791503   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 01:56:11.798128   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 01:56:11.798171   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 01:56:11.804500   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 01:56:11.810991   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 01:56:11.811035   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 01:56:11.817450   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 01:56:11.824006   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 01:56:11.824043   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 01:56:11.830294   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 01:56:11.836875   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 01:56:11.836916   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 01:56:11.843283   16330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 01:56:11.894889   16330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 01:56:11.945786   16330 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 01:56:20.658736   16330 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 01:56:20.658844   16330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 01:56:20.658988   16330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 01:56:20.659068   16330 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 01:56:20.659113   16330 kubeadm.go:319] OS: Linux
	I1209 01:56:20.659190   16330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 01:56:20.659273   16330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 01:56:20.659352   16330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 01:56:20.659420   16330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 01:56:20.659494   16330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 01:56:20.659571   16330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 01:56:20.659665   16330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 01:56:20.659730   16330 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 01:56:20.659831   16330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 01:56:20.659958   16330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 01:56:20.660047   16330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 01:56:20.660101   16330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 01:56:20.661507   16330 out.go:252]   - Generating certificates and keys ...
	I1209 01:56:20.661577   16330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 01:56:20.661676   16330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 01:56:20.661778   16330 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 01:56:20.661844   16330 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 01:56:20.661898   16330 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 01:56:20.661941   16330 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 01:56:20.662008   16330 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 01:56:20.662186   16330 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-598284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 01:56:20.662259   16330 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 01:56:20.662417   16330 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-598284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 01:56:20.662506   16330 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 01:56:20.662599   16330 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 01:56:20.662681   16330 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 01:56:20.662740   16330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 01:56:20.662789   16330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 01:56:20.662871   16330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 01:56:20.662928   16330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 01:56:20.662995   16330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 01:56:20.663075   16330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 01:56:20.663159   16330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 01:56:20.663218   16330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 01:56:20.664270   16330 out.go:252]   - Booting up control plane ...
	I1209 01:56:20.664345   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 01:56:20.664423   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 01:56:20.664518   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 01:56:20.664665   16330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 01:56:20.664787   16330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 01:56:20.664919   16330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 01:56:20.664992   16330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 01:56:20.665030   16330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 01:56:20.665149   16330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 01:56:20.665236   16330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 01:56:20.665288   16330 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001667737s
	I1209 01:56:20.665406   16330 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 01:56:20.665524   16330 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1209 01:56:20.665608   16330 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 01:56:20.665685   16330 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 01:56:20.665749   16330 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006217248s
	I1209 01:56:20.665810   16330 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.509129045s
	I1209 01:56:20.665867   16330 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001144176s
	I1209 01:56:20.665964   16330 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 01:56:20.666076   16330 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 01:56:20.666129   16330 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 01:56:20.666291   16330 kubeadm.go:319] [mark-control-plane] Marking the node addons-598284 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 01:56:20.666339   16330 kubeadm.go:319] [bootstrap-token] Using token: uk26sz.ajgylvhs2iiq32ld
	I1209 01:56:20.668063   16330 out.go:252]   - Configuring RBAC rules ...
	I1209 01:56:20.668149   16330 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 01:56:20.668236   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 01:56:20.668360   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 01:56:20.668477   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 01:56:20.668584   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 01:56:20.668669   16330 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 01:56:20.668771   16330 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 01:56:20.668834   16330 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 01:56:20.668894   16330 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 01:56:20.668900   16330 kubeadm.go:319] 
	I1209 01:56:20.668950   16330 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 01:56:20.668956   16330 kubeadm.go:319] 
	I1209 01:56:20.669024   16330 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 01:56:20.669031   16330 kubeadm.go:319] 
	I1209 01:56:20.669052   16330 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 01:56:20.669119   16330 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 01:56:20.669199   16330 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 01:56:20.669205   16330 kubeadm.go:319] 
	I1209 01:56:20.669279   16330 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 01:56:20.669290   16330 kubeadm.go:319] 
	I1209 01:56:20.669360   16330 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 01:56:20.669373   16330 kubeadm.go:319] 
	I1209 01:56:20.669447   16330 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 01:56:20.669546   16330 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 01:56:20.669618   16330 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 01:56:20.669625   16330 kubeadm.go:319] 
	I1209 01:56:20.669703   16330 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 01:56:20.669768   16330 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 01:56:20.669774   16330 kubeadm.go:319] 
	I1209 01:56:20.669850   16330 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uk26sz.ajgylvhs2iiq32ld \
	I1209 01:56:20.669960   16330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 01:56:20.669980   16330 kubeadm.go:319] 	--control-plane 
	I1209 01:56:20.669986   16330 kubeadm.go:319] 
	I1209 01:56:20.670083   16330 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 01:56:20.670095   16330 kubeadm.go:319] 
	I1209 01:56:20.670169   16330 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uk26sz.ajgylvhs2iiq32ld \
	I1209 01:56:20.670272   16330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
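
The [kubelet-check] and [control-plane-check] lines above are timed HTTP health polls: kubelet on 127.0.0.1:10248/healthz, then the three control-plane components on their livez/healthz ports, each with a 4m0s budget. A sketch of such a poller follows, with the endpoints copied from the log; kubeadm's real checker differs in detail (in particular, it trusts the cluster CA rather than skipping TLS verification).

    // Sketch: poll an endpoint until it returns 200 or a deadline passes,
    // the shape of the kubelet-check/control-plane-check waits above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch shortcut only; a real checker should verify the
            // apiserver's cert against the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // Endpoints from the log: kubelet healthz, then apiserver livez.
        for _, u := range []string{
            "http://127.0.0.1:10248/healthz",
            "https://192.168.49.2:8443/livez",
        } {
            if err := waitHealthy(u, 4*time.Minute); err != nil {
                panic(err)
            }
            fmt.Println(u, "is healthy")
        }
    }
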
	I1209 01:56:20.670282   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:56:20.670288   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:56:20.671536   16330 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 01:56:20.672496   16330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 01:56:20.676615   16330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 01:56:20.676630   16330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 01:56:20.688683   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
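
cni.go's recommendation above follows from the driver/runtime pair: with the docker driver and a non-docker runtime such as crio, minikube deploys kindnet as the CNI and applies its manifest with the version-pinned kubectl. A guessed, much-simplified version of that decision (the real cni.go handles many more cases):

    // Assumed, simplified decision logic; not minikube's actual cni.go.
    package main

    import "fmt"

    func chooseCNI(driver, runtime string) string {
        // Matches the log: "docker" driver + "crio" runtime -> kindnet.
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
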
	I1209 01:56:20.875721   16330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 01:56:20.875777   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:20.875807   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-598284 minikube.k8s.io/updated_at=2025_12_09T01_56_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-598284 minikube.k8s.io/primary=true
	I1209 01:56:20.884569   16330 ops.go:34] apiserver oom_adj: -16
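
The oom_adj probe at 01:56:20.875721 and the -16 result here confirm that the kernel will strongly avoid OOM-killing the apiserver. A sketch of the same check done natively instead of via pgrep and cat; note the modern kernel interface is oom_score_adj, while this legacy file is the one the log reads.

    // Sketch: find kube-apiserver by /proc/<pid>/comm and print its oom_adj.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        entries, _ := os.ReadDir("/proc")
        for _, e := range entries {
            pid := e.Name()
            if !e.IsDir() || pid[0] < '0' || pid[0] > '9' {
                continue // skip non-PID entries
            }
            comm, err := os.ReadFile(filepath.Join("/proc", pid, "comm"))
            if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
                continue
            }
            adj, err := os.ReadFile(filepath.Join("/proc", pid, "oom_adj"))
            if err == nil {
                // -16 means the kernel strongly avoids OOM-killing it.
                fmt.Printf("pid %s oom_adj=%s", pid, adj)
            }
        }
    }
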
	I1209 01:56:20.949175   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:21.449533   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:21.949605   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:22.449984   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:22.949456   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:23.449671   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:23.949302   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:24.449479   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:24.950207   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:25.449690   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:25.949318   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:26.007442   16330 kubeadm.go:1114] duration metric: took 5.131712127s to wait for elevateKubeSystemPrivileges
	I1209 01:56:26.007470   16330 kubeadm.go:403] duration metric: took 14.261006976s to StartCluster
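
The burst of `kubectl get sa default` calls above, one roughly every 500ms, is the elevateKubeSystemPrivileges wait: the default ServiceAccount appearing signals that the controller manager is reconciling, so the minikube-rbac clusterrolebinding created at 01:56:20.875777 can take effect. A sketch of that polling loop, using the binary and kubeconfig paths from the log:

    // Sketch: poll for the default ServiceAccount via the pinned kubectl.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.2/kubectl"
        args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
        deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
        for time.Now().Before(deadline) {
            if err := exec.Command(kubectl, args...).Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("default service account never appeared")
    }
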
	I1209 01:56:26.007486   16330 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:26.007614   16330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:56:26.008033   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:26.008211   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 01:56:26.008234   16330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:56:26.008290   16330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 01:56:26.008428   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:26.008443   16330 addons.go:70] Setting storage-provisioner=true in profile "addons-598284"
	I1209 01:56:26.008451   16330 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-598284"
	I1209 01:56:26.008463   16330 addons.go:239] Setting addon storage-provisioner=true in "addons-598284"
	I1209 01:56:26.008471   16330 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-598284"
	I1209 01:56:26.008430   16330 addons.go:70] Setting yakd=true in profile "addons-598284"
	I1209 01:56:26.008502   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008507   16330 addons.go:239] Setting addon yakd=true in "addons-598284"
	I1209 01:56:26.008515   16330 addons.go:70] Setting ingress=true in profile "addons-598284"
	I1209 01:56:26.008513   16330 addons.go:70] Setting default-storageclass=true in profile "addons-598284"
	I1209 01:56:26.008539   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008544   16330 addons.go:70] Setting metrics-server=true in profile "addons-598284"
	I1209 01:56:26.008560   16330 addons.go:70] Setting volcano=true in profile "addons-598284"
	I1209 01:56:26.008564   16330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-598284"
	I1209 01:56:26.008573   16330 addons.go:70] Setting inspektor-gadget=true in profile "addons-598284"
	I1209 01:56:26.008582   16330 addons.go:70] Setting registry=true in profile "addons-598284"
	I1209 01:56:26.008589   16330 addons.go:239] Setting addon volcano=true in "addons-598284"
	I1209 01:56:26.008596   16330 addons.go:70] Setting volumesnapshots=true in profile "addons-598284"
	I1209 01:56:26.008606   16330 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-598284"
	I1209 01:56:26.008611   16330 addons.go:239] Setting addon volumesnapshots=true in "addons-598284"
	I1209 01:56:26.008547   16330 addons.go:70] Setting registry-creds=true in profile "addons-598284"
	I1209 01:56:26.008629   16330 addons.go:239] Setting addon registry-creds=true in "addons-598284"
	I1209 01:56:26.008441   16330 addons.go:70] Setting ingress-dns=true in profile "addons-598284"
	I1209 01:56:26.008611   16330 addons.go:70] Setting cloud-spanner=true in profile "addons-598284"
	I1209 01:56:26.008661   16330 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-598284"
	I1209 01:56:26.008664   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008667   16330 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-598284"
	I1209 01:56:26.008509   16330 addons.go:70] Setting gcp-auth=true in profile "addons-598284"
	I1209 01:56:26.008682   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008688   16330 mustload.go:66] Loading cluster: addons-598284
	I1209 01:56:26.008658   16330 addons.go:239] Setting addon ingress-dns=true in "addons-598284"
	I1209 01:56:26.008645   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008722   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008838   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:26.008933   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009062   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009077   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009129   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009156   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008503   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.009189   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008615   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008587   16330 addons.go:239] Setting addon inspektor-gadget=true in "addons-598284"
	I1209 01:56:26.009768   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008686   16330 addons.go:239] Setting addon cloud-spanner=true in "addons-598284"
	I1209 01:56:26.009801   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008599   16330 addons.go:239] Setting addon registry=true in "addons-598284"
	I1209 01:56:26.009941   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.010280   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.010393   16330 out.go:179] * Verifying Kubernetes components...
	I1209 01:56:26.010404   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008529   16330 addons.go:239] Setting addon ingress=true in "addons-598284"
	I1209 01:56:26.010484   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008565   16330 addons.go:239] Setting addon metrics-server=true in "addons-598284"
	I1209 01:56:26.010602   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008686   16330 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-598284"
	I1209 01:56:26.008573   16330 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-598284"
	I1209 01:56:26.010782   16330 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-598284"
	I1209 01:56:26.010803   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.009162   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.011492   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.010289   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.012505   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:26.020265   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.021356   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.021849   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.022361   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.022738   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.024376   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.054323   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.055601   16330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 01:56:26.059674   16330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:26.059695   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 01:56:26.059746   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.073769   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 01:56:26.074783   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 01:56:26.074809   16330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 01:56:26.074868   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.080734   16330 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 01:56:26.081877   16330 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:26.081926   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 01:56:26.082003   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.083037   16330 addons.go:239] Setting addon default-storageclass=true in "addons-598284"
	I1209 01:56:26.085692   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.087170   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.092215   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:26.093344   16330 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 01:56:26.093373   16330 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 01:56:26.093395   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 01:56:26.095248   16330 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:26.095271   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1209 01:56:26.095327   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.095461   16330 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:26.095469   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 01:56:26.095486   16330 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 01:56:26.095506   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.097578   16330 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 01:56:26.097580   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:26.098568   16330 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 01:56:26.098584   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 01:56:26.098651   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.101574   16330 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:26.101598   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 01:56:26.101659   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.105278   16330 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 01:56:26.111727   16330 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:26.111745   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 01:56:26.111796   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.115436   16330 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 01:56:26.117678   16330 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:26.117693   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 01:56:26.117750   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.124438   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 01:56:26.124501   16330 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 01:56:26.127601   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 01:56:26.127620   16330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 01:56:26.127730   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.127829   16330 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 01:56:26.128251   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.128485   16330 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 01:56:26.129176   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 01:56:26.134156   16330 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:26.134178   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 01:56:26.134238   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.134541   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 01:56:26.134564   16330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 01:56:26.134611   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	W1209 01:56:26.136034   16330 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 01:56:26.137294   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 01:56:26.140935   16330 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-598284"
	I1209 01:56:26.140997   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.141867   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.145163   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 01:56:26.146386   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 01:56:26.150536   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 01:56:26.151618   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 01:56:26.155367   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 01:56:26.157439   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 01:56:26.159505   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.160183   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 01:56:26.160318   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 01:56:26.160416   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.165059   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173159   16330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:26.173203   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173242   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173207   16330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 01:56:26.173428   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.173180   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173163   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.183216   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.186552   16330 out.go:179]   - Using image docker.io/busybox:stable
	I1209 01:56:26.187717   16330 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 01:56:26.188708   16330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:26.188727   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 01:56:26.188780   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.193887   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.198971   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.201560   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.201781   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.209597   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.213395   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	W1209 01:56:26.214472   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.217712   16330 retry.go:31] will retry after 208.387831ms: ssh: handshake failed: EOF
	W1209 01:56:26.218827   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.218850   16330 retry.go:31] will retry after 203.482166ms: ssh: handshake failed: EOF
	I1209 01:56:26.222086   16330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:26.235984   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	W1209 01:56:26.239506   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.239545   16330 retry.go:31] will retry after 235.584956ms: ssh: handshake failed: EOF
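
The handshake EOFs here are transient: the forwarded sshd inside the freshly started container is not yet accepting connections, so sshutil logs the failure and retry.go re-dials after a short randomized delay (203 to 236 ms in this run). A sketch of that retry shape; the attempt count and jitter bounds are assumptions.

    // Sketch: retry a failing dial with a short randomized delay, the
    // pattern behind the "will retry after ...ms" lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // ~200ms plus jitter, matching the delays seen in this run.
            delay := 200*time.Millisecond + time.Duration(rand.Intn(40))*time.Millisecond
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, func() error {
            calls++
            if calls < 3 { // first two dials fail, like the log's EOFs
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
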
	I1209 01:56:26.309229   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:26.312294   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 01:56:26.312318   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 01:56:26.327852   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 01:56:26.327882   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 01:56:26.336119   16330 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 01:56:26.336141   16330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 01:56:26.338055   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 01:56:26.338071   16330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 01:56:26.340853   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:26.351530   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:26.357349   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:26.358898   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 01:56:26.358916   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 01:56:26.362316   16330 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:26.362338   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 01:56:26.363848   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:26.363888   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:26.364736   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 01:56:26.364753   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 01:56:26.365172   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:26.370061   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 01:56:26.370076   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 01:56:26.375888   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 01:56:26.375906   16330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 01:56:26.387814   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:26.390424   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 01:56:26.390506   16330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 01:56:26.395187   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 01:56:26.395352   16330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 01:56:26.410355   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 01:56:26.410380   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 01:56:26.427379   16330 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:26.427403   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 01:56:26.442594   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:26.442616   16330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 01:56:26.444199   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 01:56:26.444216   16330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 01:56:26.449960   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 01:56:26.450041   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 01:56:26.458228   16330 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1209 01:56:26.460068   16330 node_ready.go:35] waiting up to 6m0s for node "addons-598284" to be "Ready" ...
	I1209 01:56:26.499462   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:26.500387   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 01:56:26.500463   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 01:56:26.512320   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:26.515403   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:26.515478   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 01:56:26.560615   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 01:56:26.560658   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 01:56:26.594140   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:26.628817   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 01:56:26.628841   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 01:56:26.680336   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:26.685087   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 01:56:26.685171   16330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 01:56:26.686178   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:26.688541   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:26.743188   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 01:56:26.743210   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 01:56:26.789401   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 01:56:26.789428   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 01:56:26.855506   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:26.855531   16330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 01:56:26.929909   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:26.971438   16330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-598284" context rescaled to 1 replicas
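[editor's note] The rescale above trims CoreDNS to a single replica, which is enough on a one-node cluster. A hedged client-go sketch of the same operation via the Scale subresource follows; it assumes a pre-built clientset, and rescaleCoreDNS is an illustrative name, not minikube's kapi.go:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the "rescaled to 1 replicas" step: read the
// deployment's Scale subresource, set the desired count, write it back.
func rescaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}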
	I1209 01:56:27.480560   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.115358745s)
	I1209 01:56:27.480608   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.092757545s)
	I1209 01:56:27.480649   16330 addons.go:495] Verifying addon registry=true in "addons-598284"
	I1209 01:56:27.481068   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.117196233s)
	I1209 01:56:27.481090   16330 addons.go:495] Verifying addon ingress=true in "addons-598284"
	I1209 01:56:27.483479   16330 out.go:179] * Verifying ingress addon...
	I1209 01:56:27.483510   16330 out.go:179] * Verifying registry addon...
	I1209 01:56:27.486082   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 01:56:27.486246   16330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 01:56:27.492213   16330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:27.492231   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:27.492564   16330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 01:56:27.492572   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:27.945216   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.445647677s)
	W1209 01:56:27.945279   16330 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 01:56:27.945284   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.432863988s)
	I1209 01:56:27.945303   16330 addons.go:495] Verifying addon metrics-server=true in "addons-598284"
	I1209 01:56:27.945303   16330 retry.go:31] will retry after 331.680871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
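[editor's note] The failure captured above is a CRD ordering race: the VolumeSnapshotClass object and the CRDs that define its kind are applied in one batch, so the first apply fails with "ensure CRDs are installed first" and retry.go schedules another attempt with backoff. A minimal, self-contained Go sketch of that retry shape, assuming a doubling, jittered delay; the names here are illustrative, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs step up to attempts times, sleeping a jittered,
// doubling delay between failures -- the shape behind the
// "will retry after 331.680871ms" line above.
func retryWithBackoff(attempts int, base time.Duration, step func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = step(); err == nil {
			return nil
		}
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		base *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	// Fails once (CRDs not yet established), then succeeds, as in the log.
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls == 1 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	})
	fmt.Println(err) // <nil>
}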
	I1209 01:56:27.945348   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.351066607s)
	I1209 01:56:27.945385   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.265022022s)
	I1209 01:56:27.945436   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.259192618s)
	I1209 01:56:27.945501   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.256900352s)
	I1209 01:56:27.945720   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.015777742s)
	I1209 01:56:27.945744   16330 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-598284"
	I1209 01:56:27.946697   16330 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 01:56:27.946699   16330 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-598284 service yakd-dashboard -n yakd-dashboard
	
	I1209 01:56:27.948916   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 01:56:27.951556   16330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:27.951575   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:27.954050   16330 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
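[editor's note] The warning above is Kubernetes optimistic concurrency at work: two writers raced to update the csi-hostpath-sc StorageClass, and the later write carried a stale resourceVersion and was rejected. The usual client-go remedy is to re-read and reapply the change on conflict. A hedged sketch, assuming a pre-built clientset (markNonDefault is an illustrative name, not the addon's code):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-fetching and retrying on update conflicts instead of failing as above.
func markNonDefault(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err // a Conflict here triggers a fresh Get and another try
	})
}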
	I1209 01:56:28.015694   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:28.015778   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.277490   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:28.452793   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:28.464832   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:28.553687   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.553750   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:28.951944   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:28.988598   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.988625   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:29.452206   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:29.552496   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:29.552579   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:29.952570   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:30.053198   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.053292   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:30.451157   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:30.488335   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:30.488347   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.714409   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.436873742s)
	I1209 01:56:30.952190   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:30.961700   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:30.988495   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.988680   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:31.452181   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:31.552691   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:31.552841   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:31.951870   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:31.988378   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:31.988601   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:32.452552   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:32.553082   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:32.553259   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:32.952106   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:32.962979   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:32.988800   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:32.988933   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:33.452165   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:33.553255   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:33.553432   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:33.662389   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 01:56:33.662458   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:33.679555   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:33.783717   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 01:56:33.795169   16330 addons.go:239] Setting addon gcp-auth=true in "addons-598284"
	I1209 01:56:33.795221   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:33.795563   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:33.812234   16330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 01:56:33.812275   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:33.828064   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:33.915787   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:33.917138   16330 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 01:56:33.918307   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 01:56:33.918323   16330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 01:56:33.930463   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 01:56:33.930493   16330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 01:56:33.942027   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:33.942043   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 01:56:33.952588   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:33.953775   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:33.989049   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:33.989065   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:34.228752   16330 addons.go:495] Verifying addon gcp-auth=true in "addons-598284"
	I1209 01:56:34.232786   16330 out.go:179] * Verifying gcp-auth addon...
	I1209 01:56:34.234602   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 01:56:34.236556   16330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 01:56:34.236576   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:34.452579   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:34.488364   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:34.488716   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:34.737932   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:34.952429   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:35.054162   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.054218   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:35.237841   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:35.451247   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:35.462226   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:35.488254   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.488419   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:35.737536   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:35.952100   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:35.988918   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.989205   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:36.237492   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:36.452086   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:36.489112   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:36.489341   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:36.737806   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:36.952602   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:36.988475   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:36.988760   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:37.237157   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:37.451373   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:37.462264   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:37.488737   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:37.488749   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:37.736891   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:37.951383   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:37.987936   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:37.988135   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:38.236931   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:38.454881   16330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:38.454908   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:38.465471   16330 node_ready.go:49] node "addons-598284" is "Ready"
	I1209 01:56:38.465520   16330 node_ready.go:38] duration metric: took 12.00543085s for node "addons-598284" to be "Ready" ...
	I1209 01:56:38.465542   16330 api_server.go:52] waiting for apiserver process to appear ...
	I1209 01:56:38.465686   16330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 01:56:38.489920   16330 api_server.go:72] duration metric: took 12.481658854s to wait for apiserver process to appear ...
	I1209 01:56:38.489945   16330 api_server.go:88] waiting for apiserver healthz status ...
	I1209 01:56:38.489968   16330 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 01:56:38.494680   16330 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 01:56:38.495649   16330 api_server.go:141] control plane version: v1.34.2
	I1209 01:56:38.495676   16330 api_server.go:131] duration metric: took 5.723129ms to wait for apiserver health ...
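[editor's note] The health gate above is a plain HTTPS GET against the apiserver until it answers 200 "ok". A self-contained sketch of that probe using the endpoint from the log; certificate verification is skipped only because a fresh minikube apiserver presents a self-signed cert:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Local throwaway check only: do not skip TLS verification in production.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // poll until the apiserver is healthy
	}
}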
	I1209 01:56:38.495687   16330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 01:56:38.553411   16330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:38.553433   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:38.553520   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:38.554366   16330 system_pods.go:59] 20 kube-system pods found
	I1209 01:56:38.554392   16330 system_pods.go:61] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.554401   16330 system_pods.go:61] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.554412   16330 system_pods.go:61] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.554423   16330 system_pods.go:61] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.554432   16330 system_pods.go:61] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.554440   16330 system_pods.go:61] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.554449   16330 system_pods.go:61] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.554452   16330 system_pods.go:61] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.554457   16330 system_pods.go:61] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.554462   16330 system_pods.go:61] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.554470   16330 system_pods.go:61] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.554473   16330 system_pods.go:61] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.554478   16330 system_pods.go:61] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.554485   16330 system_pods.go:61] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.554503   16330 system_pods.go:61] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.554511   16330 system_pods.go:61] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.554523   16330 system_pods.go:61] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.554531   16330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.554544   16330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.554552   16330 system_pods.go:61] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.554560   16330 system_pods.go:74] duration metric: took 58.866538ms to wait for pod list to return data ...
	I1209 01:56:38.554570   16330 default_sa.go:34] waiting for default service account to be created ...
	I1209 01:56:38.556370   16330 default_sa.go:45] found service account: "default"
	I1209 01:56:38.556386   16330 default_sa.go:55] duration metric: took 1.810811ms for default service account to be created ...
	I1209 01:56:38.556394   16330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 01:56:38.559115   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:38.559137   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.559143   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.559149   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.559154   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.559160   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.559169   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.559173   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.559177   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.559181   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.559185   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.559189   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.559192   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.559196   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.559201   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.559207   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.559220   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.559224   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.559229   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.559238   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.559243   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.559256   16330 retry.go:31] will retry after 281.636363ms: missing components: kube-dns
	I1209 01:56:38.737837   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:38.845465   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:38.845497   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.845508   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.845520   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.845529   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.845537   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.845541   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.845546   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.845553   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.845557   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.845564   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.845568   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.845574   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.845580   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.845586   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.845595   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.845604   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.845613   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.845621   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.845657   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.845668   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.845686   16330 retry.go:31] will retry after 343.536778ms: missing components: kube-dns
	I1209 01:56:38.953096   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:38.989664   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:38.989963   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:39.194986   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:39.195024   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:39.195032   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Running
	I1209 01:56:39.195050   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:39.195059   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:39.195073   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:39.195081   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:39.195088   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:39.195104   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:39.195110   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:39.195119   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:39.195124   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:39.195130   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:39.195137   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:39.195152   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:39.195159   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:39.195167   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:39.195174   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:39.195181   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:39.195201   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:39.195212   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Running
	I1209 01:56:39.195223   16330 system_pods.go:126] duration metric: took 638.823026ms to wait for k8s-apps to be running ...
	I1209 01:56:39.195233   16330 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 01:56:39.195288   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 01:56:39.270735   16330 system_svc.go:56] duration metric: took 75.492081ms WaitForService to wait for kubelet
	I1209 01:56:39.270770   16330 kubeadm.go:587] duration metric: took 13.262513608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:56:39.270794   16330 node_conditions.go:102] verifying NodePressure condition ...
	I1209 01:56:39.275784   16330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 01:56:39.275812   16330 node_conditions.go:123] node cpu capacity is 8
	I1209 01:56:39.275829   16330 node_conditions.go:105] duration metric: took 5.028338ms to run NodePressure ...
	I1209 01:56:39.275842   16330 start.go:242] waiting for startup goroutines ...
	I1209 01:56:39.294467   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:39.453297   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:39.490760   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:39.491070   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:39.738465   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:39.953598   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:39.989160   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:39.989233   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.238387   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:40.452838   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:40.489192   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:40.489227   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.738461   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:40.952678   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:40.989532   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.989703   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.238143   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:41.452292   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:41.489735   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.489794   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:41.737664   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:41.953050   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:41.989688   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.989733   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:42.237319   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:42.452428   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:42.488727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:42.488897   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:42.738090   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:42.951558   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:42.988677   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:42.988761   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.236831   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:43.451828   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:43.488963   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:43.489009   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.739347   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:43.954203   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:43.990499   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.991503   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:44.238531   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:44.452957   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:44.489579   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:44.489619   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:44.737897   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:44.952158   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:44.989851   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:44.989856   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.237446   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:45.452453   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.488779   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.488815   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:45.737070   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:45.951952   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.989573   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.989776   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.237962   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:46.452341   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:46.488681   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.488765   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.737795   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:46.952727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:46.995441   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.995466   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.237990   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.452234   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.488253   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.488392   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.738092   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.952767   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.989557   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.989989   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.237974   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.452596   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.489052   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.489161   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.738119   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.952197   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.989850   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.989883   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:49.237786   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.452437   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.488285   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.488356   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:49.737906   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.952512   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.989219   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.989329   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.238499   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.452357   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:50.553005   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.553136   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.738239   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.952557   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:50.989085   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.989256   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.238002   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.451865   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.488731   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:51.488861   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.737405   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.952591   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.989104   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.989130   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.237603   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.452325   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.488174   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.488191   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:52.738127   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.951907   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.989621   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.989830   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.236884   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.451838   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.488827   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.488855   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.737602   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.952325   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.988372   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.988419   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.238038   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.451993   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.488806   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.488932   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.738161   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.952322   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.989580   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.989580   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.312532   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.452729   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.489654   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.489730   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.737169   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.952294   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.988840   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.988847   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.238155   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.452363   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.552388   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.552444   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.737660   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.952424   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.988414   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.988482   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.237938   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.452451   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.488862   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.488904   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.737870   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.952590   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.989291   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.989727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.237795   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.452705   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.488571   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.488712   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:58.737837   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.951491   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.988555   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.988648   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.237335   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.452721   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.490168   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.490210   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.738552   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.952420   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.989011   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.989056   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.237886   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.451796   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.488898   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.488943   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.737973   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.952059   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.989601   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.989618   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.237206   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.451972   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.488761   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.488773   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.737216   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.952358   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.988899   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.989077   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.237117   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.451735   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.488816   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.488843   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.738267   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.952463   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.988862   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.988925   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.238222   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.452317   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.552543   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.552764   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.737393   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.952363   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.990790   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.991156   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.239666   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.452975   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.489335   16330 kapi.go:107] duration metric: took 37.003247964s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 01:57:04.489596   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:04.738356   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.953339   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.990041   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.283366   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.452693   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.489114   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.737724   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.952624   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.988596   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.237054   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.452271   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.490435   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.738092   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.952327   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.989781   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.237619   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.455379   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.490544   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.738022   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.952623   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.989415   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.237945   16330 kapi.go:107] duration metric: took 34.003339464s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 01:57:08.239358   16330 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-598284 cluster.
	I1209 01:57:08.240451   16330 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 01:57:08.241547   16330 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
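
The three messages above describe the gcp-auth webhook's contract: every new pod in the cluster gets the credential mount unless it carries a label with the `gcp-auth-skip-secret` key. Below is a minimal sketch of creating such an opted-out pod with k8s.io/client-go; the pod name and image are placeholders, and only the label key comes from the log.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube writes (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical name
			// The webhook skips pods carrying this label key; the key is
			// quoted in the minikube output above, the value is arbitrary.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox"}, // placeholder image
			},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
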
	I1209 01:57:08.452116   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.578926   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.953484   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.988951   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.452596   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.488969   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.952475   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.990027   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.452409   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.489538   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.952027   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.989046   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.452239   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.552916   16330 kapi.go:107] duration metric: took 44.066665843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 01:57:11.952720   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.452738   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.952601   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.452518   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.952117   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.453167   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.952492   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.452764   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.951978   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.451966   16330 kapi.go:107] duration metric: took 48.503047854s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 01:57:16.453544   16330 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, nvidia-device-plugin, ingress-dns, registry-creds, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 01:57:16.454689   16330 addons.go:530] duration metric: took 50.446402988s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin inspektor-gadget nvidia-device-plugin ingress-dns registry-creds metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1209 01:57:16.454722   16330 start.go:247] waiting for cluster config update ...
	I1209 01:57:16.454740   16330 start.go:256] writing updated cluster config ...
	I1209 01:57:16.454975   16330 ssh_runner.go:195] Run: rm -f paused
	I1209 01:57:16.458791   16330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:57:16.461371   16330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fvxpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.464926   16330 pod_ready.go:94] pod "coredns-66bc5c9577-fvxpf" is "Ready"
	I1209 01:57:16.464947   16330 pod_ready.go:86] duration metric: took 3.557534ms for pod "coredns-66bc5c9577-fvxpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.466623   16330 pod_ready.go:83] waiting for pod "etcd-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.469560   16330 pod_ready.go:94] pod "etcd-addons-598284" is "Ready"
	I1209 01:57:16.469581   16330 pod_ready.go:86] duration metric: took 2.92871ms for pod "etcd-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.471088   16330 pod_ready.go:83] waiting for pod "kube-apiserver-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.473980   16330 pod_ready.go:94] pod "kube-apiserver-addons-598284" is "Ready"
	I1209 01:57:16.473998   16330 pod_ready.go:86] duration metric: took 2.89148ms for pod "kube-apiserver-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.475443   16330 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.862383   16330 pod_ready.go:94] pod "kube-controller-manager-addons-598284" is "Ready"
	I1209 01:57:16.862408   16330 pod_ready.go:86] duration metric: took 386.947982ms for pod "kube-controller-manager-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.062164   16330 pod_ready.go:83] waiting for pod "kube-proxy-xb9c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.463493   16330 pod_ready.go:94] pod "kube-proxy-xb9c9" is "Ready"
	I1209 01:57:17.463520   16330 pod_ready.go:86] duration metric: took 401.33333ms for pod "kube-proxy-xb9c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.662244   16330 pod_ready.go:83] waiting for pod "kube-scheduler-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:18.061837   16330 pod_ready.go:94] pod "kube-scheduler-addons-598284" is "Ready"
	I1209 01:57:18.061861   16330 pod_ready.go:86] duration metric: took 399.594577ms for pod "kube-scheduler-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:18.061872   16330 pod_ready.go:40] duration metric: took 1.603058936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:57:18.104702   16330 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 01:57:18.106558   16330 out.go:179] * Done! kubectl is now configured to use "addons-598284" cluster and "default" namespace by default
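
The kapi.go:96 lines that dominate this log are the visible half of a poll loop: each addon's pods are listed by label selector about every 500ms (per the timestamps) until every match reports Ready, at which point kapi.go:107 prints the elapsed "duration metric" line. A minimal sketch of that pattern, assuming k8s.io/client-go; function names, the poll interval, and the timeout are illustrative, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector until all of them are Ready,
// mirroring the "waiting for pod ... current state: Pending" lines above.
func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between polls
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

// allReady reports whether every pod has the Ready condition set to True.
func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// One of the selectors polled in the log above.
	if err := waitForLabel(context.Background(), client, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
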
	
	
	==> CRI-O <==
	Dec 09 01:58:28 addons-598284 crio[769]: time="2025-12-09T01:58:28.945597947Z" level=info msg="Stopped container 4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b: default/task-pv-pod-restore/task-pv-container" id=799754ba-0f98-4d7e-87ec-bf71d1c93fc3 name=/runtime.v1.RuntimeService/StopContainer
	Dec 09 01:58:28 addons-598284 crio[769]: time="2025-12-09T01:58:28.946149774Z" level=info msg="Stopping pod sandbox: cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=315cd6dd-15cb-4fa5-933b-b9f3641188c8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:58:28 addons-598284 crio[769]: time="2025-12-09T01:58:28.94637093Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6 UID:38e1fc2d-3648-4c3f-9e1a-c90a7da5b030 NetNS:/var/run/netns/01398f1b-3728-44b2-b2fc-efb45978b2ce Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000382358}] Aliases:map[]}"
	Dec 09 01:58:28 addons-598284 crio[769]: time="2025-12-09T01:58:28.946489226Z" level=info msg="Deleting pod default_task-pv-pod-restore from CNI network \"kindnet\" (type=ptp)"
	Dec 09 01:58:28 addons-598284 crio[769]: time="2025-12-09T01:58:28.965887113Z" level=info msg="Stopped pod sandbox: cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=315cd6dd-15cb-4fa5-933b-b9f3641188c8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:58:29 addons-598284 crio[769]: time="2025-12-09T01:58:29.440703576Z" level=info msg="Removing container: 4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b" id=dc26ee34-f6c0-471d-bf53-a2018a4b1f8b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 01:58:29 addons-598284 crio[769]: time="2025-12-09T01:58:29.447365796Z" level=info msg="Removed container 4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b: default/task-pv-pod-restore/task-pv-container" id=dc26ee34-f6c0-471d-bf53-a2018a4b1f8b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 01:59:19 addons-598284 crio[769]: time="2025-12-09T01:59:19.942493013Z" level=info msg="Stopping pod sandbox: cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=35b7d92c-eee2-4433-8332-dc4e254a1977 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:59:19 addons-598284 crio[769]: time="2025-12-09T01:59:19.942543262Z" level=info msg="Stopped pod sandbox (already stopped): cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=35b7d92c-eee2-4433-8332-dc4e254a1977 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:59:19 addons-598284 crio[769]: time="2025-12-09T01:59:19.942890624Z" level=info msg="Removing pod sandbox: cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=44ffb991-bb7e-4dcb-9c92-9ede7bb6e510 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 01:59:19 addons-598284 crio[769]: time="2025-12-09T01:59:19.945762849Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 01:59:19 addons-598284 crio[769]: time="2025-12-09T01:59:19.945817898Z" level=info msg="Removed pod sandbox: cc3e692d68927399ca4b889c21b896abdbcfeeaec3aa5952b16e15756c1bbcf6" id=44ffb991-bb7e-4dcb-9c92-9ede7bb6e510 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 02:00:00 addons-598284 crio[769]: time="2025-12-09T02:00:00.996338183Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kwdnj/POD" id=25e14fca-14b1-42ab-af5b-ac5f7603a4cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:00:00 addons-598284 crio[769]: time="2025-12-09T02:00:00.996404149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.002443097Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kwdnj Namespace:default ID:f0f8c9be0ea38f0ddecd46bcc64cd7b5894b64a19b8705a0a01f1fd53e04ea34 UID:2dc4da31-44f3-4318-9597-015e5c9a496b NetNS:/var/run/netns/35ec513c-f3ea-4236-9bfd-84abb3fb92c9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00040efa8}] Aliases:map[]}"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.002477385Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kwdnj to CNI network \"kindnet\" (type=ptp)"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.012775097Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kwdnj Namespace:default ID:f0f8c9be0ea38f0ddecd46bcc64cd7b5894b64a19b8705a0a01f1fd53e04ea34 UID:2dc4da31-44f3-4318-9597-015e5c9a496b NetNS:/var/run/netns/35ec513c-f3ea-4236-9bfd-84abb3fb92c9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00040efa8}] Aliases:map[]}"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.012900791Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kwdnj for CNI network kindnet (type=ptp)"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.013617301Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.014368301Z" level=info msg="Ran pod sandbox f0f8c9be0ea38f0ddecd46bcc64cd7b5894b64a19b8705a0a01f1fd53e04ea34 with infra container: default/hello-world-app-5d498dc89-kwdnj/POD" id=25e14fca-14b1-42ab-af5b-ac5f7603a4cb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.015409199Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b92b1d60-84bf-4811-b700-9528f78d7867 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.01553706Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b92b1d60-84bf-4811-b700-9528f78d7867 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.015580689Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=b92b1d60-84bf-4811-b700-9528f78d7867 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.016185059Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1db726fd-e6a5-4f9c-a5b2-75f9c745f9d2 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:00:01 addons-598284 crio[769]: time="2025-12-09T02:00:01.020921199Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	6755ec162936c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago       Running             registry-creds                           0                   1b1805f9d40d8       registry-creds-764b6fb674-25mz9             kube-system
	24a0e8d0103ac       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                                           2 minutes ago       Running             nginx                                    0                   524e835ef307f       nginx                                       default
	66e46ba7618c8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago       Running             busybox                                  0                   3ec98b67aa6dd       busybox                                     default
	0cf8359e032c5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago       Running             csi-snapshotter                          0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	09ee3d53e0739       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago       Running             csi-provisioner                          0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	07327f304dd6a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago       Running             liveness-probe                           0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	cd4e7d6b980f0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago       Running             hostpath                                 0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	d32117de7c58e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago       Running             node-driver-registrar                    0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	165b4491c3f96       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago       Running             controller                               0                   3f66552b6789e       ingress-nginx-controller-85d4c799dd-rkcmx   ingress-nginx
	ca4804ab9c48e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago       Running             gcp-auth                                 0                   f2c049d533745       gcp-auth-78565c9fb4-sg5ff                   gcp-auth
	79d771e63df5c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago       Running             gadget                                   0                   a5e553c981796       gadget-cwdlx                                gadget
	a22b2817d5b76       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago       Running             registry-proxy                           0                   799829e163f51       registry-proxy-nhhw6                        kube-system
	9259d8cba23be       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago       Running             amd-gpu-device-plugin                    0                   5a34b34bb6e0d       amd-gpu-device-plugin-ftp97                 kube-system
	58565aa6aebcd       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   741b0f505f2bb       nvidia-device-plugin-daemonset-f8kcp        kube-system
	258a6b06d27dc       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago       Running             registry                                 0                   6f17461cfb8ee       registry-6b586f9694-g2qp5                   kube-system
	4f60883937b8b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   bdb44500bcec8       csi-hostpathplugin-c7mht                    kube-system
	99770ac31d147       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   394070b11927c       csi-hostpath-resizer-0                      kube-system
	6ad8e94399619       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago       Exited              patch                                    0                   c81998943be09       ingress-nginx-admission-patch-xg4qv         ingress-nginx
	a8861dac6b035       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   c5fa041bea91d       snapshot-controller-7d9fbc56b8-qg54s        kube-system
	ebb1b92540ca1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   7ade1a0238a7d       local-path-provisioner-648f6765c9-r5jbl     local-path-storage
	c222dc3a3f279       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   7acf650818914       snapshot-controller-7d9fbc56b8-k5rzs        kube-system
	7a1b1e01077e4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   053772d238c2b       csi-hostpath-attacher-0                     kube-system
	60c37a2cb1cdc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago       Exited              create                                   0                   c7c6d2d419c1e       ingress-nginx-admission-create-sqg9m        ingress-nginx
	e745cbc0143b0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago       Running             yakd                                     0                   6a4e9342b846a       yakd-dashboard-5ff678cb9-kgfgf              yakd-dashboard
	69b827fe1bc6e       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago       Running             metrics-server                           0                   240db2311777a       metrics-server-85b7d694d7-bzvbq             kube-system
	acdbe88ce48d7       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago       Running             cloud-spanner-emulator                   0                   52fedd4d42bdf       cloud-spanner-emulator-5bdddb765-tqgf8      default
	af1bbcbd5b2e7       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago       Running             minikube-ingress-dns                     0                   8094382ecbf41       kube-ingress-dns-minikube                   kube-system
	2c82ba2d18c01       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago       Running             coredns                                  0                   9d7aff926c80b       coredns-66bc5c9577-fvxpf                    kube-system
	c21d5137f49f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago       Running             storage-provisioner                      0                   243f2fe79eb5a       storage-provisioner                         kube-system
	ea6bd4352d85a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago       Running             kindnet-cni                              0                   ae8f3e789172f       kindnet-krjk7                               kube-system
	c951a1040b335       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago       Running             kube-proxy                               0                   ad04bcae3ea80       kube-proxy-xb9c9                            kube-system
	40e0aceab5999       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago       Running             etcd                                     0                   72a153d634b9a       etcd-addons-598284                          kube-system
	49c6272ba70f5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             3 minutes ago       Running             kube-controller-manager                  0                   6fbc6f9099273       kube-controller-manager-addons-598284       kube-system
	16e2e43c2d88b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             3 minutes ago       Running             kube-apiserver                           0                   59f4b03e8372b       kube-apiserver-addons-598284                kube-system
	b5bddb335ebc6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             3 minutes ago       Running             kube-scheduler                           0                   4eefecc11ae00       kube-scheduler-addons-598284                kube-system
	
	
	==> coredns [2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7] <==
	[INFO] 10.244.0.22:43057 - 7051 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000204938s
	[INFO] 10.244.0.22:38292 - 29605 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00477914s
	[INFO] 10.244.0.22:54098 - 27242 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007710524s
	[INFO] 10.244.0.22:59616 - 35348 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004680096s
	[INFO] 10.244.0.22:59771 - 44539 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005136327s
	[INFO] 10.244.0.22:44524 - 44485 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005144146s
	[INFO] 10.244.0.22:49118 - 19164 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005423151s
	[INFO] 10.244.0.22:41416 - 7458 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001099238s
	[INFO] 10.244.0.22:38737 - 27293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001478006s
	[INFO] 10.244.0.24:33122 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166486s
	[INFO] 10.244.0.24:43231 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181565s
	[INFO] 10.244.0.27:59010 - 61812 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000204467s
	[INFO] 10.244.0.27:45642 - 17494 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000276485s
	[INFO] 10.244.0.27:56028 - 28173 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000113807s
	[INFO] 10.244.0.27:50361 - 58261 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000178461s
	[INFO] 10.244.0.27:35866 - 59692 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00010443s
	[INFO] 10.244.0.27:35995 - 48696 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000107362s
	[INFO] 10.244.0.27:46512 - 43215 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004710866s
	[INFO] 10.244.0.27:40268 - 44651 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004850175s
	[INFO] 10.244.0.27:34827 - 19453 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00398877s
	[INFO] 10.244.0.27:44717 - 35325 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004126066s
	[INFO] 10.244.0.27:57367 - 8187 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003528663s
	[INFO] 10.244.0.27:60669 - 62552 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004470312s
	[INFO] 10.244.0.27:38737 - 6843 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001668657s
	[INFO] 10.244.0.27:39463 - 7871 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001812373s
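	
	The NXDOMAIN-then-NOERROR runs above are ordinary pod DNS search-path expansion: Kubernetes writes ndots:5 into pod resolv.conf, so a name with fewer than five dots is tried against each search domain before the absolute name. A minimal sketch of that order, using only the search domains visible in the queries (the authoritative list is the pod's /etc/resolv.conf and may include more entries):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Search domains inferred from the coredns queries above.
		search := []string{
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		const ndots = 5 // Kubernetes' default in pod resolv.conf
	
		name := "storage.googleapis.com" // two dots, below the ndots threshold
		if strings.Count(name, ".") < ndots {
			for _, d := range search {
				// Each expansion is one NXDOMAIN line in the log above.
				fmt.Printf("query %s.%s -> NXDOMAIN\n", name, d)
			}
		}
		// The absolute name is tried last and succeeds (the NOERROR lines).
		fmt.Printf("query %s. -> NOERROR\n", name)
	}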
	
	
	==> describe nodes <==
	Name:               addons-598284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-598284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-598284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T01_56_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-598284
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-598284"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 01:56:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-598284
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 01:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 01:57:51 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 01:57:51 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 01:57:51 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 01:57:51 +0000   Tue, 09 Dec 2025 01:56:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-598284
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                418097e5-e43f-4ca7-be60-ac2cb9fae4ef
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     cloud-spanner-emulator-5bdddb765-tqgf8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  default                     hello-world-app-5d498dc89-kwdnj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-cwdlx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  gcp-auth                    gcp-auth-78565c9fb4-sg5ff                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-rkcmx    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m35s
	  kube-system                 amd-gpu-device-plugin-ftp97                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 coredns-66bc5c9577-fvxpf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m37s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 csi-hostpathplugin-c7mht                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 etcd-addons-598284                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m43s
	  kube-system                 kindnet-krjk7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-addons-598284                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-addons-598284        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-xb9c9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-scheduler-addons-598284                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 metrics-server-85b7d694d7-bzvbq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m35s
	  kube-system                 nvidia-device-plugin-daemonset-f8kcp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 registry-6b586f9694-g2qp5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 registry-creds-764b6fb674-25mz9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 registry-proxy-nhhw6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-k5rzs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 snapshot-controller-7d9fbc56b8-qg54s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  local-path-storage          local-path-provisioner-648f6765c9-r5jbl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kgfgf               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m47s)  kubelet          Node addons-598284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m47s)  kubelet          Node addons-598284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x8 over 3m47s)  kubelet          Node addons-598284 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s                  kubelet          Node addons-598284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s                  kubelet          Node addons-598284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s                  kubelet          Node addons-598284 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m38s                  node-controller  Node addons-598284 event: Registered Node addons-598284 in Controller
	  Normal  NodeReady                3m24s                  kubelet          Node addons-598284 status is now: NodeReady
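	
	The Allocated resources CPU line follows directly from the pod table: the eight non-zero CPU requests sum to 1050m, and 1050m of the node's 8000m allocatable is about 13%. A quick cross-check, with the values transcribed from the table above:
	
	package main
	
	import "fmt"
	
	func main() {
		// Non-zero CPU requests (millicores) copied from the pod table above.
		requests := []int{
			100, // ingress-nginx-controller
			100, // coredns
			100, // etcd
			100, // kindnet
			250, // kube-apiserver
			200, // kube-controller-manager
			100, // kube-scheduler
			100, // metrics-server
		}
		total := 0
		for _, r := range requests {
			total += r
		}
		// 8 allocatable CPUs = 8000m; describe truncates 13.125% to 13%.
		fmt.Printf("%dm of 8000m = %.3f%%\n", total, float64(total)/8000*100)
	}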
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7] <==
	{"level":"warn","ts":"2025-12-09T01:56:17.168249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.176790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.183725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.189959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.197888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.204629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.211127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.218620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.227312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.233791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.239954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.248963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.256010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.270878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.276943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.282839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.323569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:28.485556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:28.492118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.102855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.109519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.123623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.129739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T01:56:55.310985Z","caller":"traceutil/trace.go:172","msg":"trace[399212708] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"101.046943ms","start":"2025-12-09T01:56:55.209907Z","end":"2025-12-09T01:56:55.310954Z","steps":["trace[399212708] 'process raft request'  (duration: 98.769036ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:08.840054Z","caller":"traceutil/trace.go:172","msg":"trace[1052877146] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"133.518408ms","start":"2025-12-09T01:57:08.706509Z","end":"2025-12-09T01:57:08.840027Z","steps":["trace[1052877146] 'process raft request'  (duration: 133.40629ms)"],"step_count":1}
	
	
	==> gcp-auth [ca4804ab9c48e4ef7ece366b97cb88a4d8b446a3a1009efba581332bfacc94e8] <==
	2025/12/09 01:57:07 GCP Auth Webhook started!
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:36 Ready to marshal response ...
	2025/12/09 01:57:36 Ready to write response ...
	2025/12/09 01:57:38 Ready to marshal response ...
	2025/12/09 01:57:38 Ready to write response ...
	2025/12/09 01:57:39 Ready to marshal response ...
	2025/12/09 01:57:39 Ready to write response ...
	2025/12/09 01:57:39 Ready to marshal response ...
	2025/12/09 01:57:39 Ready to write response ...
	2025/12/09 01:57:46 Ready to marshal response ...
	2025/12/09 01:57:46 Ready to write response ...
	2025/12/09 01:57:54 Ready to marshal response ...
	2025/12/09 01:57:54 Ready to write response ...
	2025/12/09 01:58:22 Ready to marshal response ...
	2025/12/09 01:58:22 Ready to write response ...
	2025/12/09 02:00:00 Ready to marshal response ...
	2025/12/09 02:00:00 Ready to write response ...
	
	
	==> kernel <==
	 02:00:02 up 42 min,  0 user,  load average: 0.70, 0.76, 0.37
	Linux addons-598284 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd] <==
	I1209 01:57:57.484254       1 main.go:301] handling current node
	I1209 01:58:07.484199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:07.484225       1 main.go:301] handling current node
	I1209 01:58:17.485122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:17.485149       1 main.go:301] handling current node
	I1209 01:58:27.483559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:27.483592       1 main.go:301] handling current node
	I1209 01:58:37.485170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:37.485204       1 main.go:301] handling current node
	I1209 01:58:47.492158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:47.492185       1 main.go:301] handling current node
	I1209 01:58:57.492287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:58:57.492329       1 main.go:301] handling current node
	I1209 01:59:07.490103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:07.490130       1 main.go:301] handling current node
	I1209 01:59:17.483342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:17.483378       1 main.go:301] handling current node
	I1209 01:59:27.492482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:27.492509       1 main.go:301] handling current node
	I1209 01:59:37.483303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:37.483337       1 main.go:301] handling current node
	I1209 01:59:47.483307       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:47.483335       1 main.go:301] handling current node
	I1209 01:59:57.485749       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:59:57.485780       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5] <==
	E1209 01:56:38.045516       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:38.059810       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.117.172:443: connect: connection refused
	E1209 01:56:38.060496       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:38.064297       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.117.172:443: connect: connection refused
	E1209 01:56:38.064331       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.018529       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:48.018903       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 01:56:48.018985       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1209 01:56:48.019325       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.024120       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.044681       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	I1209 01:56:48.118923       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1209 01:56:52.102797       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.109479       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.123561       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.129730       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1209 01:57:25.779350       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37064: use of closed network connection
	E1209 01:57:25.917242       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37106: use of closed network connection
	I1209 01:57:37.910952       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 01:57:38.082874       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.6.223"}
	I1209 01:58:00.493267       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 02:00:00.760787       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.182.32"}
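	
	The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings mean the API server skipped the unreachable webhook instead of rejecting the request, which is the behavior selected by failurePolicy: Ignore. A sketch of the relevant field using the admissionregistration/v1 types (illustrative; the gcp-auth addon's actual manifest may differ):
	
	package main
	
	import (
		"fmt"
	
		admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		ignore := admissionregistrationv1.Ignore // "fail open"
		cfg := admissionregistrationv1.MutatingWebhookConfiguration{
			ObjectMeta: metav1.ObjectMeta{Name: "gcp-auth-mutate.k8s.io"},
			Webhooks: []admissionregistrationv1.MutatingWebhook{{
				Name:          "gcp-auth-mutate.k8s.io",
				FailurePolicy: &ignore,
				// ClientConfig, rules, etc. omitted; the point is that
				// Ignore makes webhook dial errors non-fatal, producing
				// the "failing open" warnings above instead of denials.
			}},
		}
		fmt.Println(*cfg.Webhooks[0].FailurePolicy)
	}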
	
	
	==> kube-controller-manager [49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179] <==
	I1209 01:56:24.725777       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 01:56:24.725776       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 01:56:24.725822       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 01:56:24.725877       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 01:56:24.726025       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 01:56:24.727070       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 01:56:24.727103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 01:56:24.728175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 01:56:24.729250       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 01:56:24.730400       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:24.730411       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 01:56:24.730457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 01:56:24.734671       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 01:56:24.735791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:24.738999       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 01:56:24.744211       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 01:56:24.747441       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 01:56:24.749580       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1209 01:56:27.223246       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1209 01:56:39.727837       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1209 01:56:54.741181       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1209 01:56:54.741237       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1209 01:56:54.755045       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 01:56:54.842269       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:54.855574       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6] <==
	I1209 01:56:26.953375       1 server_linux.go:53] "Using iptables proxy"
	I1209 01:56:27.102590       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 01:56:27.205700       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 01:56:27.205738       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1209 01:56:27.205808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 01:56:27.271823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 01:56:27.272821       1 server_linux.go:132] "Using iptables Proxier"
	I1209 01:56:27.298466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 01:56:27.305800       1 server.go:527] "Version info" version="v1.34.2"
	I1209 01:56:27.305836       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 01:56:27.312025       1 config.go:200] "Starting service config controller"
	I1209 01:56:27.312056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 01:56:27.312393       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 01:56:27.312418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 01:56:27.312443       1 config.go:106] "Starting endpoint slice config controller"
	I1209 01:56:27.312449       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 01:56:27.312702       1 config.go:309] "Starting node config controller"
	I1209 01:56:27.312807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 01:56:27.413327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 01:56:27.413352       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 01:56:27.413362       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 01:56:27.413631       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
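	
	The "Waiting for caches to sync" / "Caches are synced" pairs here (and in the kube-controller-manager log above) are client-go's shared-informer startup handshake: start the informers, then block until the initial LIST is cached. A minimal sketch of the pattern, assuming a kubeconfig at the default path (illustrative; not kube-proxy's actual wiring):
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		stop := make(chan struct{})
		defer close(stop)
	
		factory := informers.NewSharedInformerFactory(cs, 30*time.Minute)
		nodes := factory.Core().V1().Nodes().Informer()
	
		factory.Start(stop) // "Starting ... config controller"
		// Blocks until the initial list is cached: the "Waiting for caches
		// to sync" -> "Caches are synced" transition seen in the log.
		if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
			panic("cache sync failed")
		}
		fmt.Println("caches are synced")
	}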
	
	
	==> kube-scheduler [b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d] <==
	E1209 01:56:17.756762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 01:56:17.756808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 01:56:17.756879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:17.756927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:17.756972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 01:56:17.756973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:17.757006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:17.757004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 01:56:17.757081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:17.757099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:17.757105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 01:56:18.582747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:18.675007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:18.689922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 01:56:18.739005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:18.795861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:18.851745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:18.853495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:18.863440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 01:56:18.867387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 01:56:18.872181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 01:56:18.877985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:18.899963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 01:56:18.952860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1209 01:56:19.154601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 01:58:22 addons-598284 kubelet[1282]: I1209 01:58:22.948497    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ggq\" (UniqueName: \"kubernetes.io/projected/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-kube-api-access-d8ggq\") pod \"task-pv-pod-restore\" (UID: \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\") " pod="default/task-pv-pod-restore"
	Dec 09 01:58:23 addons-598284 kubelet[1282]: I1209 01:58:23.054591    1282 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-8f0d44fd-2685-4e60-bedb-5e51f397c8e2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41\") pod \"task-pv-pod-restore\" (UID: \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/d0245da591846e0f7b3c8672d230d253448b09608d00ca49dda0d469d88b75b8/globalmount\"" pod="default/task-pv-pod-restore"
	Dec 09 01:58:28 addons-598284 kubelet[1282]: I1209 01:58:28.836042    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=6.836016159 podStartE2EDuration="6.836016159s" podCreationTimestamp="2025-12-09 01:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 01:58:23.427203867 +0000 UTC m=+123.619547146" watchObservedRunningTime="2025-12-09 01:58:28.836016159 +0000 UTC m=+129.028359438"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.089245    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41\") pod \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\" (UID: \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\") "
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.089297    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ggq\" (UniqueName: \"kubernetes.io/projected/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-kube-api-access-d8ggq\") pod \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\" (UID: \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\") "
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.089347    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-gcp-creds\") pod \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\" (UID: \"38e1fc2d-3648-4c3f-9e1a-c90a7da5b030\") "
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.089479    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030" (UID: "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.091551    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-kube-api-access-d8ggq" (OuterVolumeSpecName: "kube-api-access-d8ggq") pod "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030" (UID: "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030"). InnerVolumeSpecName "kube-api-access-d8ggq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.092812    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41" (OuterVolumeSpecName: "task-pv-storage") pod "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030" (UID: "38e1fc2d-3648-4c3f-9e1a-c90a7da5b030"). InnerVolumeSpecName "pvc-8f0d44fd-2685-4e60-bedb-5e51f397c8e2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.190710    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-gcp-creds\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.190762    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-8f0d44fd-2685-4e60-bedb-5e51f397c8e2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41\") on node \"addons-598284\" "
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.190780    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8ggq\" (UniqueName: \"kubernetes.io/projected/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030-kube-api-access-d8ggq\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.194926    1282 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-8f0d44fd-2685-4e60-bedb-5e51f397c8e2" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41") on node "addons-598284"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.291393    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-8f0d44fd-2685-4e60-bedb-5e51f397c8e2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8ab9cd51-d4a2-11f0-8ac3-ea5b552d1b41\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.439360    1282 scope.go:117] "RemoveContainer" containerID="4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.447546    1282 scope.go:117] "RemoveContainer" containerID="4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: E1209 01:58:29.447898    1282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b\": container with ID starting with 4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b not found: ID does not exist" containerID="4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.447931    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b"} err="failed to get container status \"4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b\": rpc error: code = NotFound desc = could not find container \"4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b\": container with ID starting with 4f49d2c8acb055331a4e64617a9842022c1ffad7491a39cde2d7029e68cc610b not found: ID does not exist"
	Dec 09 01:58:29 addons-598284 kubelet[1282]: I1209 01:58:29.886501    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e1fc2d-3648-4c3f-9e1a-c90a7da5b030" path="/var/lib/kubelet/pods/38e1fc2d-3648-4c3f-9e1a-c90a7da5b030/volumes"
	Dec 09 01:58:32 addons-598284 kubelet[1282]: I1209 01:58:32.883184    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nhhw6" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 01:59:14 addons-598284 kubelet[1282]: I1209 01:59:14.883849    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ftp97" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 01:59:34 addons-598284 kubelet[1282]: I1209 01:59:34.883872    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nhhw6" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 01:59:41 addons-598284 kubelet[1282]: I1209 01:59:41.883861    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-f8kcp" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 02:00:00 addons-598284 kubelet[1282]: I1209 02:00:00.742003    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxs5q\" (UniqueName: \"kubernetes.io/projected/2dc4da31-44f3-4318-9597-015e5c9a496b-kube-api-access-cxs5q\") pod \"hello-world-app-5d498dc89-kwdnj\" (UID: \"2dc4da31-44f3-4318-9597-015e5c9a496b\") " pod="default/hello-world-app-5d498dc89-kwdnj"
	Dec 09 02:00:00 addons-598284 kubelet[1282]: I1209 02:00:00.742089    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2dc4da31-44f3-4318-9597-015e5c9a496b-gcp-creds\") pod \"hello-world-app-5d498dc89-kwdnj\" (UID: \"2dc4da31-44f3-4318-9597-015e5c9a496b\") " pod="default/hello-world-app-5d498dc89-kwdnj"
	
	
	==> storage-provisioner [c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3] <==
	W1209 01:59:37.393414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:39.396825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:39.400702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:41.403813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:41.408629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:43.411209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:43.415097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:45.418398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:45.423241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:47.426074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:47.429296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:49.432204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:49.437065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:51.439472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:51.442626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:53.446664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:53.450039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:55.452273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:55.456351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:57.458478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:57.463297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:59.466254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:59:59.469436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:01.471764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:01.475733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
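	
	The repeating storage-provisioner warnings indicate it still reads v1 Endpoints (likely for its leader-election lock, renewed every couple of seconds) and the API server flags each read as deprecated, suggesting the discovery.k8s.io/v1 EndpointSlice API instead. A minimal client-go sketch of the suggested call (illustrative; not the provisioner's code):
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// discovery.k8s.io/v1 EndpointSlices, as the deprecation warning
		// suggests, instead of cs.CoreV1().Endpoints(...).
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints), "endpoints")
		}
	}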
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-598284 -n addons-598284
helpers_test.go:269: (dbg) Run:  kubectl --context addons-598284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-kwdnj ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-598284 describe pod hello-world-app-5d498dc89-kwdnj ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-598284 describe pod hello-world-app-5d498dc89-kwdnj ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv: exit status 1 (61.340653ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-kwdnj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-598284/192.168.49.2
	Start Time:       Tue, 09 Dec 2025 02:00:00 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cxs5q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cxs5q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-kwdnj to addons-598284
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sqg9m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xg4qv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-598284 describe pod hello-world-app-5d498dc89-kwdnj ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv: exit status 1
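kubectl describe exits with status 1 when any of the named pods cannot be found, even though it did describe hello-world-app-5d498dc89-kwdnj successfully above; the two ingress-nginx admission pods are created by short-lived Jobs and had evidently already been removed (or never lived in the default namespace this command searched). A hedged sketch for probing which pods still exist before describing them (pod names taken from this run):

	# `kubectl get --ignore-not-found` prints nothing and exits 0 for absent pods,
	# so it filters the candidate list down to pods that still exist:
	for p in hello-world-app-5d498dc89-kwdnj \
	         ingress-nginx-admission-create-sqg9m \
	         ingress-nginx-admission-patch-xg4qv; do
	  kubectl --context addons-598284 get pod "$p" --ignore-not-found
	done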
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (228.884268ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:00:02.960942   30692 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:00:02.961110   30692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:00:02.961121   30692 out.go:374] Setting ErrFile to fd 2...
	I1209 02:00:02.961127   30692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:00:02.961297   30692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:00:02.961547   30692 mustload.go:66] Loading cluster: addons-598284
	I1209 02:00:02.961878   30692 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:00:02.961901   30692 addons.go:622] checking whether the cluster is paused
	I1209 02:00:02.962006   30692 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:00:02.962021   30692 host.go:66] Checking if "addons-598284" exists ...
	I1209 02:00:02.962368   30692 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 02:00:02.979630   30692 ssh_runner.go:195] Run: systemctl --version
	I1209 02:00:02.979992   30692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 02:00:02.997087   30692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 02:00:03.086422   30692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:00:03.086497   30692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:00:03.113668   30692 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 02:00:03.113687   30692 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 02:00:03.113691   30692 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 02:00:03.113695   30692 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 02:00:03.113700   30692 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 02:00:03.113716   30692 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 02:00:03.113723   30692 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 02:00:03.113734   30692 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 02:00:03.113740   30692 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 02:00:03.113749   30692 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 02:00:03.113755   30692 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 02:00:03.113758   30692 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 02:00:03.113761   30692 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 02:00:03.113764   30692 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 02:00:03.113766   30692 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 02:00:03.113772   30692 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 02:00:03.113777   30692 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 02:00:03.113780   30692 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 02:00:03.113783   30692 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 02:00:03.113786   30692 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 02:00:03.113795   30692 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 02:00:03.113803   30692 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 02:00:03.113808   30692 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 02:00:03.113816   30692 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 02:00:03.113820   30692 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 02:00:03.113828   30692 cri.go:89] found id: ""
	I1209 02:00:03.113867   30692 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:00:03.127333   30692 out.go:203] 
	W1209 02:00:03.128399   30692 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:00:03.128425   30692 out.go:285] * 
	* 
	W1209 02:00:03.131618   30692 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:00:03.132873   30692 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
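Every addons disable in this report fails the same way: before touching an addon, minikube checks whether the cluster is paused by running sudo runc list -f json on the node, and that command aborts because runc's state directory /run/runc does not exist there. A diagnostic sketch for confirming which OCI runtime this crio node actually uses (the crun state path is an assumption, not taken from this log):

	minikube -p addons-598284 ssh
	# which low-level runtime is crio configured with?
	sudo crictl info | grep -i runtime
	# runc keeps its state under /run/runc; crun keeps its own directory, so
	# `runc list` has nothing to open when runc never ran a container here:
	ls -d /run/runc /run/crun
	# reproduce the exact check minikube performs:
	sudo runc list -f json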
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable ingress --alsologtostderr -v=1: exit status 11 (230.943428ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:00:03.189895   30755 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:00:03.190185   30755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:00:03.190195   30755 out.go:374] Setting ErrFile to fd 2...
	I1209 02:00:03.190202   30755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:00:03.190417   30755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:00:03.190694   30755 mustload.go:66] Loading cluster: addons-598284
	I1209 02:00:03.190999   30755 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:00:03.191020   30755 addons.go:622] checking whether the cluster is paused
	I1209 02:00:03.191116   30755 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:00:03.191131   30755 host.go:66] Checking if "addons-598284" exists ...
	I1209 02:00:03.191526   30755 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 02:00:03.208767   30755 ssh_runner.go:195] Run: systemctl --version
	I1209 02:00:03.208808   30755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 02:00:03.226020   30755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 02:00:03.316733   30755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:00:03.316828   30755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:00:03.343848   30755 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 02:00:03.343875   30755 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 02:00:03.343881   30755 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 02:00:03.343884   30755 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 02:00:03.343887   30755 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 02:00:03.343891   30755 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 02:00:03.343894   30755 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 02:00:03.343897   30755 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 02:00:03.343902   30755 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 02:00:03.343915   30755 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 02:00:03.343921   30755 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 02:00:03.343924   30755 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 02:00:03.343926   30755 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 02:00:03.343929   30755 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 02:00:03.343931   30755 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 02:00:03.343939   30755 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 02:00:03.343945   30755 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 02:00:03.343949   30755 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 02:00:03.343952   30755 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 02:00:03.343955   30755 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 02:00:03.343957   30755 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 02:00:03.343960   30755 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 02:00:03.343962   30755 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 02:00:03.343965   30755 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 02:00:03.343967   30755 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 02:00:03.343970   30755 cri.go:89] found id: ""
	I1209 02:00:03.344009   30755 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:00:03.358176   30755 out.go:203] 
	W1209 02:00:03.359204   30755 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:00:03.359227   30755 out.go:285] * 
	* 
	W1209 02:00:03.362145   30755 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:00:03.363464   30755 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cwdlx" [10e55bdf-1ada-4c4c-9f64-333d37e1d509] Running
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003263691s
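The poll above can also be expressed as one blocking call; a sketch with kubectl wait (the timeout mirrors the test's 8m budget, nothing else is assumed):

	# block until every pod carrying the gadget label reports Ready:
	kubectl --context addons-598284 -n gadget wait pod \
	  -l k8s-app=gadget --for=condition=Ready --timeout=8m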
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (238.823718ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:57:38.454466   25274 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:38.454830   25274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:38.454842   25274 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:38.454847   25274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:38.455033   25274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:38.455306   25274 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:38.455630   25274 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:38.455665   25274 addons.go:622] checking whether the cluster is paused
	I1209 01:57:38.455752   25274 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:38.455765   25274 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:38.456143   25274 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:38.474696   25274 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:38.474744   25274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:38.491158   25274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:38.581668   25274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:38.581752   25274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:38.611983   25274 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:38.612005   25274 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:38.612011   25274 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:38.612016   25274 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:38.612021   25274 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:38.612027   25274 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:38.612030   25274 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:38.612033   25274 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:38.612036   25274 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:38.612055   25274 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:38.612064   25274 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:38.612069   25274 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:38.612075   25274 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:38.612080   25274 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:38.612088   25274 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:38.612094   25274 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:38.612102   25274 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:38.612108   25274 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:38.612112   25274 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:38.612117   25274 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:38.612121   25274 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:38.612131   25274 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:38.612138   25274 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:38.612143   25274 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:38.612151   25274 cri.go:89] found id: ""
	I1209 01:57:38.612194   25274 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:38.625576   25274 out.go:203] 
	W1209 01:57:38.626590   25274 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:38.626612   25274 out.go:285] * 
	* 
	W1209 01:57:38.629935   25274 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:38.631016   25274 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.24s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:515: metrics-server stabilized in 3.256932ms
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Running
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002057388s
addons_test.go:523: (dbg) Run:  kubectl --context addons-598284 top pods -n kube-system
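kubectl top pods only returns data once metrics-server is registered as the aggregated metrics.k8s.io API, so this step doubles as the functional check for the addon. A quick way to confirm the registration (v1beta1.metrics.k8s.io is the APIService metrics-server conventionally installs, not a name read from this log):

	# Available=True here is the precondition for `kubectl top` to work:
	kubectl --context addons-598284 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-598284 top pods -n kube-system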
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (228.972758ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:57:37.502978   24872 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:37.503127   24872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:37.503136   24872 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:37.503140   24872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:37.503335   24872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:37.503565   24872 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:37.503874   24872 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:37.503892   24872 addons.go:622] checking whether the cluster is paused
	I1209 01:57:37.503970   24872 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:37.503981   24872 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:37.504329   24872 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:37.520977   24872 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:37.521027   24872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:37.537493   24872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:37.627811   24872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:37.627897   24872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:37.655469   24872 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:37.655485   24872 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:37.655490   24872 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:37.655498   24872 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:37.655502   24872 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:37.655505   24872 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:37.655508   24872 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:37.655510   24872 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:37.655513   24872 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:37.655518   24872 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:37.655521   24872 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:37.655523   24872 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:37.655526   24872 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:37.655529   24872 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:37.655531   24872 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:37.655538   24872 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:37.655543   24872 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:37.655558   24872 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:37.655562   24872 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:37.655565   24872 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:37.655570   24872 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:37.655573   24872 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:37.655576   24872 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:37.655578   24872 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:37.655580   24872 cri.go:89] found id: ""
	I1209 01:57:37.655612   24872 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:37.669292   24872 out.go:203] 
	W1209 01:57:37.670430   24872 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:37.670447   24872 out.go:285] * 
	* 
	W1209 01:57:37.673331   24872 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:37.674497   24872 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.29s)

                                                
                                    
TestAddons/parallel/CSI (64.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 01:57:26.158143   14552 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:609: csi-hostpath-driver pods stabilized in 3.743423ms
addons_test.go:612: (dbg) Run:  kubectl --context addons-598284 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc -o jsonpath={.status.phase} -n default
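The run of identical polls above reads the claim's .status.phase until it reaches the value the test expects; the claim only settles once the next step creates a consuming pod, which is consistent with (though not proof of) a WaitForFirstConsumer StorageClass. Checking is cheap; a sketch, with csi-hostpath-sc as an assumed class name:

	# Immediate provisions on PVC creation; WaitForFirstConsumer waits for a pod:
	kubectl --context addons-598284 get sc csi-hostpath-sc -o jsonpath='{.volumeBindingMode}'
	# the claim's events spell out what it is waiting for:
	kubectl --context addons-598284 -n default describe pvc hpvc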
addons_test.go:622: (dbg) Run:  kubectl --context addons-598284 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [96606c5e-02cb-4997-bbc7-23e1a43d4352] Pending
helpers_test.go:352: "task-pv-pod" [96606c5e-02cb-4997-bbc7-23e1a43d4352] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003532341s
addons_test.go:632: (dbg) Run:  kubectl --context addons-598284 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:637: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-598284 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-598284 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
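The two polls above read .status.readyToUse on the VolumeSnapshot, which the snapshot controller flips to true once the CSI snapshotter has cut the snapshot. The same wait can be written as a single declarative call; a sketch (the timeout mirrors the test's 6m budget):

	# block until the snapshot is marked usable:
	kubectl --context addons-598284 -n default wait volumesnapshot/new-snapshot-demo \
	  --for=jsonpath='{.status.readyToUse}'=true --timeout=6m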
addons_test.go:642: (dbg) Run:  kubectl --context addons-598284 delete pod task-pv-pod
addons_test.go:648: (dbg) Run:  kubectl --context addons-598284 delete pvc hpvc
addons_test.go:654: (dbg) Run:  kubectl --context addons-598284 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:659: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:664: (dbg) Run:  kubectl --context addons-598284 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:669: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [38e1fc2d-3648-4c3f-9e1a-c90a7da5b030] Pending
helpers_test.go:352: "task-pv-pod-restore" [38e1fc2d-3648-4c3f-9e1a-c90a7da5b030] Running
addons_test.go:669: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003866756s
addons_test.go:674: (dbg) Run:  kubectl --context addons-598284 delete pod task-pv-pod-restore
addons_test.go:678: (dbg) Run:  kubectl --context addons-598284 delete pvc hpvc-restore
addons_test.go:682: (dbg) Run:  kubectl --context addons-598284 delete volumesnapshot new-snapshot-demo
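The restore half of the test works because a PersistentVolumeClaim may name a VolumeSnapshot as its dataSource, in which case the CSI driver pre-populates the new volume from the snapshot. The actual testdata/csi-hostpath-driver/pvc-restore.yaml is not reproduced in this log, so the following is only a sketch of the usual shape of such a claim (resource names reused from this run; the class name and size are assumptions):

	# hypothetical pvc-restore manifest, piped straight to the cluster:
	kubectl --context addons-598284 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc      # assumed class name
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo              # the snapshot created above
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi                       # assumed size
	EOF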
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (231.21659ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:58:29.826806   28724 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:58:29.827071   28724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:58:29.827081   28724 out.go:374] Setting ErrFile to fd 2...
	I1209 01:58:29.827085   28724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:58:29.827266   28724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:58:29.827527   28724 mustload.go:66] Loading cluster: addons-598284
	I1209 01:58:29.827851   28724 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:58:29.827870   28724 addons.go:622] checking whether the cluster is paused
	I1209 01:58:29.827957   28724 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:58:29.827968   28724 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:58:29.828327   28724 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:58:29.846126   28724 ssh_runner.go:195] Run: systemctl --version
	I1209 01:58:29.846170   28724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:58:29.861948   28724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:58:29.952738   28724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:58:29.952813   28724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:58:29.982118   28724 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:58:29.982137   28724 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:58:29.982141   28724 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:58:29.982145   28724 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:58:29.982148   28724 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:58:29.982151   28724 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:58:29.982154   28724 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:58:29.982156   28724 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:58:29.982159   28724 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:58:29.982164   28724 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:58:29.982167   28724 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:58:29.982170   28724 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:58:29.982173   28724 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:58:29.982175   28724 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:58:29.982178   28724 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:58:29.982182   28724 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:58:29.982185   28724 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:58:29.982189   28724 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:58:29.982191   28724 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:58:29.982194   28724 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:58:29.982197   28724 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:58:29.982200   28724 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:58:29.982208   28724 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:58:29.982214   28724 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:58:29.982216   28724 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:58:29.982219   28724 cri.go:89] found id: ""
	I1209 01:58:29.982259   28724 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:58:29.996053   28724 out.go:203] 
	W1209 01:58:29.997267   28724 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:58:29.997283   28724 out.go:285] * 
	* 
	W1209 01:58:30.000213   28724 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:58:30.001506   28724 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (227.556701ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:58:30.058287   28787 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:58:30.058415   28787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:58:30.058423   28787 out.go:374] Setting ErrFile to fd 2...
	I1209 01:58:30.058427   28787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:58:30.058602   28787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:58:30.058873   28787 mustload.go:66] Loading cluster: addons-598284
	I1209 01:58:30.059167   28787 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:58:30.059184   28787 addons.go:622] checking whether the cluster is paused
	I1209 01:58:30.059259   28787 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:58:30.059271   28787 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:58:30.059622   28787 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:58:30.076996   28787 ssh_runner.go:195] Run: systemctl --version
	I1209 01:58:30.077035   28787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:58:30.093537   28787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:58:30.183771   28787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:58:30.183861   28787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:58:30.211274   28787 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:58:30.211303   28787 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:58:30.211308   28787 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:58:30.211313   28787 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:58:30.211315   28787 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:58:30.211320   28787 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:58:30.211324   28787 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:58:30.211328   28787 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:58:30.211333   28787 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:58:30.211345   28787 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:58:30.211354   28787 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:58:30.211359   28787 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:58:30.211366   28787 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:58:30.211371   28787 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:58:30.211379   28787 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:58:30.211394   28787 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:58:30.211403   28787 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:58:30.211410   28787 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:58:30.211414   28787 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:58:30.211417   28787 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:58:30.211424   28787 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:58:30.211431   28787 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:58:30.211436   28787 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:58:30.211444   28787 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:58:30.211448   28787 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:58:30.211458   28787 cri.go:89] found id: ""
	I1209 01:58:30.211520   28787 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:58:30.224028   28787 out.go:203] 
	W1209 01:58:30.225156   28787 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:58:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:58:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:58:30.225175   28787 out.go:285] * 
	* 
	W1209 01:58:30.228535   28787 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:58:30.229592   28787 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (64.08s)
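Every `exit status 11` above traces back to the same pre-flight check: before enabling or disabling an addon, minikube checks whether the cluster is paused (addons.go:622), and that check shells out to `sudo runc list -f json` on the node. On this crio node the command fails with `open /run/runc: no such file or directory`, which suggests runc's default state directory simply does not exist there, so the check aborts and minikube reports MK_ADDON_DISABLE_PAUSED. The following is a minimal Go sketch of that failing probe, for illustration only; it is not minikube's actual implementation:

	// paused-probe sketch: runs the same command the report shows failing.
	// Assumes runc keeps state under its default root, /run/runc; on this
	// crio node that directory is absent, so runc exits non-zero.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// This is the branch taken throughout this report: runc exits 1
			// before any container state can be listed.
			fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Printf("container state: %s\n", out)
	}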

TestAddons/parallel/Headlamp (2.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:868: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-598284 --alsologtostderr -v=1
addons_test.go:868: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-598284 --alsologtostderr -v=1: exit status 11 (257.801539ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 01:57:47.174177   26772 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:47.174461   26772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:47.174472   26772 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:47.174476   26772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:47.174764   26772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:47.175127   26772 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:47.175452   26772 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:47.175473   26772 addons.go:622] checking whether the cluster is paused
	I1209 01:57:47.175555   26772 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:47.175568   26772 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:47.176064   26772 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:47.197842   26772 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:47.197923   26772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:47.218734   26772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:47.313579   26772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:47.313665   26772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:47.344949   26772 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:57:47.344980   26772 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:47.344987   26772 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:47.344992   26772 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:47.344996   26772 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:47.345002   26772 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:47.345006   26772 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:47.345011   26772 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:47.345016   26772 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:47.345038   26772 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:47.345047   26772 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:47.345052   26772 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:47.345056   26772 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:47.345061   26772 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:47.345067   26772 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:47.345083   26772 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:47.345093   26772 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:47.345099   26772 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:47.345103   26772 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:47.345108   26772 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:47.345113   26772 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:47.345118   26772 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:47.345124   26772 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:47.345130   26772 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:47.345138   26772 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:47.345143   26772 cri.go:89] found id: ""
	I1209 01:57:47.345209   26772 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:47.358386   26772 out.go:203] 
	W1209 01:57:47.359520   26772 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:47.359536   26772 out.go:285] * 
	* 
	W1209 01:57:47.362438   26772 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:47.363594   26772 out.go:203] 

** /stderr **
addons_test.go:870: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-598284 --alsologtostderr -v=1": exit status 11
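Note the contrast in the stderr above: the CRI-side listing succeeds (the long run of `cri.go:89] found id:` lines comes from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`), and only the follow-up runc call fails. A sketch of that working probe, assuming `crictl` is on the node's PATH; the label filter is copied verbatim from the log:

	// crictl probe sketch: lists kube-system container IDs through the
	// CRI runtime (crio here), mirroring the invocation in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}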
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-598284
helpers_test.go:243: (dbg) docker inspect addons-598284:

-- stdout --
	[
	    {
	        "Id": "af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a",
	        "Created": "2025-12-09T01:56:07.77688206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T01:56:07.805870805Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/hosts",
	        "LogPath": "/var/lib/docker/containers/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a/af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a-json.log",
	        "Name": "/addons-598284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-598284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-598284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "af613a4a6c361b5341d93c7ce7c09ace6d7ad88f6776c225efa61fe23aebcb0a",
	                "LowerDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/merged",
	                "UpperDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/diff",
	                "WorkDir": "/var/lib/docker/overlay2/436809ada0d646a429a9d1bcedf27ddd1f37521b473593791d2a4bdf91725f22/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-598284",
	                "Source": "/var/lib/docker/volumes/addons-598284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-598284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-598284",
	                "name.minikube.sigs.k8s.io": "addons-598284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a930a2cd7ef80013d6dbd4ab7fd70855f90b1b360390174f9c9db4402805326",
	            "SandboxKey": "/var/run/docker/netns/8a930a2cd7ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-598284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0182f7928830a74743f077e940f476da5a02ae5531a91dbf01f6402ec74d0736",
	                    "EndpointID": "d372fdf1933e1b672f77161db45b875da4b8b9d12892ebbf2c4136d97e436db7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "92:24:09:61:65:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-598284",
	                        "af613a4a6c36"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
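The inspect dump above is also where the harness resolves its SSH endpoint: `NetworkSettings.Ports` maps 22/tcp to 127.0.0.1:32768, matching the sshutil.go line earlier in the report. A sketch of extracting that mapping with the same Go template shown in the cli_runner.go log lines, assuming a local docker CLI and the profile name from this report:

	// port-lookup sketch: prints the host port bound to the container's
	// 22/tcp, using the template from the cli_runner.go log lines.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"addons-598284").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("ssh host port: %s", out) // 32768 in this report
	}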
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-598284 -n addons-598284
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-598284 logs -n 25: (1.070159649s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-983180                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-983180   │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-261314                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-261314   │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-303316                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-303316   │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ --download-only -p download-docker-666539 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-666539 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ -p download-docker-666539                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-666539 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ --download-only -p binary-mirror-013418 --alsologtostderr --binary-mirror http://127.0.0.1:35749 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-013418   │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ -p binary-mirror-013418                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-013418   │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ addons  │ enable dashboard -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ start   │ -p addons-598284 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-598284                                                                                                                                                                                                                                                                                                                                                                                           │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ip      │ addons-598284 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ addons-598284 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ssh     │ addons-598284 ssh cat /opt/local-path-provisioner/pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │ 09 Dec 25 01:57 UTC │
	│ addons  │ addons-598284 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ ssh     │ addons-598284 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-598284 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-598284          │ jenkins │ v1.37.0 │ 09 Dec 25 01:57 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:45.179972   16330 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:45.180095   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:45.180105   16330 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:45.180111   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:45.180370   16330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:55:45.180830   16330 out.go:368] Setting JSON to false
	I1209 01:55:45.181539   16330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2294,"bootTime":1765243051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:45.181604   16330 start.go:143] virtualization: kvm guest
	I1209 01:55:45.183258   16330 out.go:179] * [addons-598284] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:45.184289   16330 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 01:55:45.184319   16330 notify.go:221] Checking for updates...
	I1209 01:55:45.186642   16330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:45.187856   16330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:55:45.188826   16330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 01:55:45.189960   16330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 01:55:45.190955   16330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 01:55:45.192154   16330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:45.212933   16330 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 01:55:45.213048   16330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:45.262867   16330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-09 01:55:45.254148119 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:45.262987   16330 docker.go:319] overlay module found
	I1209 01:55:45.264532   16330 out.go:179] * Using the docker driver based on user configuration
	I1209 01:55:45.265685   16330 start.go:309] selected driver: docker
	I1209 01:55:45.265701   16330 start.go:927] validating driver "docker" against <nil>
	I1209 01:55:45.265713   16330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 01:55:45.266238   16330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:45.321321   16330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-09 01:55:45.31189074 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:45.321463   16330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:45.321716   16330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:55:45.323153   16330 out.go:179] * Using Docker driver with root privileges
	I1209 01:55:45.324212   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:55:45.324262   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:55:45.324271   16330 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:45.324320   16330 start.go:353] cluster config:
	{Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:45.325537   16330 out.go:179] * Starting "addons-598284" primary control-plane node in "addons-598284" cluster
	I1209 01:55:45.326541   16330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 01:55:45.327557   16330 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 01:55:45.328492   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:55:45.328527   16330 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 01:55:45.328536   16330 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:45.328583   16330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 01:55:45.328642   16330 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 01:55:45.328658   16330 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 01:55:45.328975   16330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json ...
	I1209 01:55:45.329005   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json: {Name:mk6a13e76bffff1fe136e5fbf8142f787a177248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:55:45.343761   16330 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 01:55:45.343860   16330 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 01:55:45.343874   16330 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 01:55:45.343878   16330 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 01:55:45.343885   16330 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 01:55:45.343892   16330 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c from local cache
	I1209 01:55:57.421936   16330 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c from cached tarball
	I1209 01:55:57.421975   16330 cache.go:243] Successfully downloaded all kic artifacts
	I1209 01:55:57.422027   16330 start.go:360] acquireMachinesLock for addons-598284: {Name:mk44b5bd868e7b7f8b62000352ad95d542ea5dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:57.422128   16330 start.go:364] duration metric: took 78.237µs to acquireMachinesLock for "addons-598284"
	I1209 01:55:57.422155   16330 start.go:93] Provisioning new machine with config: &{Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:55:57.422254   16330 start.go:125] createHost starting for "" (driver="docker")
	I1209 01:55:57.423773   16330 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1209 01:55:57.423984   16330 start.go:159] libmachine.API.Create for "addons-598284" (driver="docker")
	I1209 01:55:57.424019   16330 client.go:173] LocalClient.Create starting
	I1209 01:55:57.424106   16330 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 01:55:57.461025   16330 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 01:55:57.520927   16330 cli_runner.go:164] Run: docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 01:55:57.537548   16330 cli_runner.go:211] docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 01:55:57.537609   16330 network_create.go:284] running [docker network inspect addons-598284] to gather additional debugging logs...
	I1209 01:55:57.537631   16330 cli_runner.go:164] Run: docker network inspect addons-598284
	W1209 01:55:57.552763   16330 cli_runner.go:211] docker network inspect addons-598284 returned with exit code 1
	I1209 01:55:57.552789   16330 network_create.go:287] error running [docker network inspect addons-598284]: docker network inspect addons-598284: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-598284 not found
	I1209 01:55:57.552814   16330 network_create.go:289] output of [docker network inspect addons-598284]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-598284 not found
	
	** /stderr **
	I1209 01:55:57.552893   16330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 01:55:57.569604   16330 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e4ab60}
	I1209 01:55:57.569651   16330 network_create.go:124] attempt to create docker network addons-598284 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 01:55:57.569699   16330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-598284 addons-598284
	I1209 01:55:57.612792   16330 network_create.go:108] docker network addons-598284 192.168.49.0/24 created
	I1209 01:55:57.612834   16330 kic.go:121] calculated static IP "192.168.49.2" for the "addons-598284" container
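
The two log lines above pick a free private /24 (settling on 192.168.49.0/24) and then derive the node's static IP from it: the gateway takes .1 and the first client address .2 becomes the node. Below is a minimal Go sketch of that kind of subnet probe; candidateFree and its candidate list are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"net"
    )

    // candidateFree returns the first 192.168.x.0/24 candidate that does not
    // overlap any subnet already in use. Hypothetical helper for illustration.
    func candidateFree(inUse []*net.IPNet) (*net.IPNet, error) {
    	for third := 49; third < 256; third += 10 { // start near 192.168.49.0/24, as in the log
    		_, cand, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		if err != nil {
    			return nil, err
    		}
    		free := true
    		for _, used := range inUse {
    			if used.Contains(cand.IP) || cand.Contains(used.IP) {
    				free = false
    				break
    			}
    		}
    		if free {
    			return cand, nil
    		}
    	}
    	return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
    	_, docker0, _ := net.ParseCIDR("172.17.0.0/16") // pretend this range is already taken
    	subnet, err := candidateFree([]*net.IPNet{docker0})
    	if err != nil {
    		panic(err)
    	}
    	ip := subnet.IP.To4()
    	gateway := net.IPv4(ip[0], ip[1], ip[2], 1) // .1, as in the network create above
    	nodeIP := net.IPv4(ip[0], ip[1], ip[2], 2)  // first client IP -> static node IP
    	fmt.Println(subnet, gateway, nodeIP)        // 192.168.49.0/24 192.168.49.1 192.168.49.2
    }
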
	I1209 01:55:57.612899   16330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 01:55:57.628020   16330 cli_runner.go:164] Run: docker volume create addons-598284 --label name.minikube.sigs.k8s.io=addons-598284 --label created_by.minikube.sigs.k8s.io=true
	I1209 01:55:57.643552   16330 oci.go:103] Successfully created a docker volume addons-598284
	I1209 01:55:57.643651   16330 cli_runner.go:164] Run: docker run --rm --name addons-598284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --entrypoint /usr/bin/test -v addons-598284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 01:56:03.951245   16330 cli_runner.go:217] Completed: docker run --rm --name addons-598284-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --entrypoint /usr/bin/test -v addons-598284:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib: (6.307538532s)
	I1209 01:56:03.951282   16330 oci.go:107] Successfully prepared a docker volume addons-598284
	I1209 01:56:03.951339   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:56:03.951351   16330 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 01:56:03.951400   16330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-598284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 01:56:07.709675   16330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-598284:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.758206606s)
	I1209 01:56:07.709704   16330 kic.go:203] duration metric: took 3.758349437s to extract preloaded images to volume ...
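
The preload step above is a plain `docker run` with tar as the entrypoint: the lz4 tarball is bind-mounted read-only and unpacked into the named volume that later becomes the node's /var. A sketch wrapping the same invocation from Go; the paths and image tag in main are placeholders standing in for the values the log shows.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload mirrors the docker invocation in the log: mount the lz4
    // preload tarball read-only, mount the named volume at /extractDir, untar.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("preload extract: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder arguments; substitute the cache path, volume, and kicbase
    	// image from your own run.
    	err := extractPreload("/path/to/preloaded-images.tar.lz4", "addons-598284",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066")
    	fmt.Println(err)
    }
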
	W1209 01:56:07.709785   16330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 01:56:07.709818   16330 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 01:56:07.709869   16330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 01:56:07.762156   16330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-598284 --name addons-598284 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598284 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-598284 --network addons-598284 --ip 192.168.49.2 --volume addons-598284:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 01:56:08.036330   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Running}}
	I1209 01:56:08.053914   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.072812   16330 cli_runner.go:164] Run: docker exec addons-598284 stat /var/lib/dpkg/alternatives/iptables
	I1209 01:56:08.114590   16330 oci.go:144] the created container "addons-598284" has a running status.
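
Every SSH hop in the provisioning that follows resolves the random host port Docker bound to the container's 22/tcp using a Go template passed to `docker container inspect` (the `"22/tcp"` index expression recurring below). A sketch of the same lookup; the container name matches this run.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshPort asks Docker which host port was published for the container's
    // 22/tcp, using the same template the log passes to `container inspect`.
    func sshPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("addons-598284")
    	fmt.Println(port, err) // e.g. 32768, matching the sshutil lines below
    }
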
	I1209 01:56:08.114615   16330 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa...
	I1209 01:56:08.226334   16330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 01:56:08.253741   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.274089   16330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 01:56:08.274112   16330 kic_runner.go:114] Args: [docker exec --privileged addons-598284 chown docker:docker /home/docker/.ssh/authorized_keys]
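
A sketch of the key generation behind "Creating ssh key for kic" above: an RSA keypair, the private half written as PKCS#1 PEM, the public half emitted in authorized_keys format (a single RSA-2048 line comes to roughly the 381 bytes uploaded above). It uses golang.org/x/crypto/ssh; the output file name is illustrative.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Private half, PEM-encoded like .minikube/machines/<name>/id_rsa.
    	priv := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
    		panic(err)
    	}
    	// Public half as one authorized_keys line for /home/docker/.ssh.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
    }
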
	I1209 01:56:08.323788   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:08.345424   16330 machine.go:94] provisionDockerMachine start ...
	I1209 01:56:08.345513   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.369045   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.369361   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.369379   16330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 01:56:08.500248   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-598284
	
	I1209 01:56:08.500281   16330 ubuntu.go:182] provisioning hostname "addons-598284"
	I1209 01:56:08.500350   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.518116   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.518338   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.518355   16330 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-598284 && echo "addons-598284" | sudo tee /etc/hostname
	I1209 01:56:08.652739   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-598284
	
	I1209 01:56:08.652820   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.670004   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:08.670301   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:08.670327   16330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-598284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-598284/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-598284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 01:56:08.792355   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 01:56:08.792381   16330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 01:56:08.792412   16330 ubuntu.go:190] setting up certificates
	I1209 01:56:08.792428   16330 provision.go:84] configureAuth start
	I1209 01:56:08.792470   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:08.808571   16330 provision.go:143] copyHostCerts
	I1209 01:56:08.808656   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 01:56:08.808803   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 01:56:08.808905   16330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 01:56:08.809007   16330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.addons-598284 san=[127.0.0.1 192.168.49.2 addons-598284 localhost minikube]
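
The server certificate above is signed by the minikube CA and carries both IP and DNS SANs. A self-contained crypto/x509 sketch reproducing that SAN list; the throwaway CA here merely stands in for the run's ca.pem/ca-key.pem, and most error handling is pared down.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the ca.pem/ca-key.pem created earlier.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list from the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-598284"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:     []string{"addons-598284", "localhost", "minikube"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
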
	I1209 01:56:08.845401   16330 provision.go:177] copyRemoteCerts
	I1209 01:56:08.845455   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 01:56:08.845486   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:08.861265   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:08.951701   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 01:56:08.970096   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 01:56:08.985439   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 01:56:09.000721   16330 provision.go:87] duration metric: took 208.283987ms to configureAuth
	I1209 01:56:09.000746   16330 ubuntu.go:206] setting minikube options for container-runtime
	I1209 01:56:09.000898   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:09.000988   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.017366   16330 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:09.017628   16330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 01:56:09.017663   16330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 01:56:09.272420   16330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 01:56:09.272454   16330 machine.go:97] duration metric: took 927.008723ms to provisionDockerMachine
	I1209 01:56:09.272467   16330 client.go:176] duration metric: took 11.848439258s to LocalClient.Create
	I1209 01:56:09.272493   16330 start.go:167] duration metric: took 11.848507122s to libmachine.API.Create "addons-598284"
	I1209 01:56:09.272504   16330 start.go:293] postStartSetup for "addons-598284" (driver="docker")
	I1209 01:56:09.272517   16330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 01:56:09.272596   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 01:56:09.272658   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.289745   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.381544   16330 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 01:56:09.384630   16330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 01:56:09.384677   16330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 01:56:09.384690   16330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 01:56:09.384745   16330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 01:56:09.384769   16330 start.go:296] duration metric: took 112.258367ms for postStartSetup
	I1209 01:56:09.385028   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:09.401292   16330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/config.json ...
	I1209 01:56:09.401529   16330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 01:56:09.401569   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.417796   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.504872   16330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 01:56:09.508992   16330 start.go:128] duration metric: took 12.086723888s to createHost
	I1209 01:56:09.509017   16330 start.go:83] releasing machines lock for "addons-598284", held for 12.086876985s
	I1209 01:56:09.509082   16330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598284
	I1209 01:56:09.525776   16330 ssh_runner.go:195] Run: cat /version.json
	I1209 01:56:09.525816   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.525855   16330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 01:56:09.525917   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:09.543886   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.544684   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:09.702523   16330 ssh_runner.go:195] Run: systemctl --version
	I1209 01:56:09.708484   16330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 01:56:09.740161   16330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 01:56:09.744220   16330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 01:56:09.744287   16330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 01:56:09.766968   16330 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
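
The find/-exec mv above parks any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so kindnet can take over later. An equivalent sketch in Go, with the directory and patterns taken from that command:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames matching CNI configs to <name>.mk_disabled,
    // mirroring the find/-exec mv invocation on the node.
    func disableBridgeCNI(dir string) ([]string, error) {
    	var moved []string
    	for _, pat := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pat))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already parked
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			moved = append(moved, m)
    		}
    	}
    	return moved, nil
    }

    func main() {
    	moved, err := disableBridgeCNI("/etc/cni/net.d")
    	fmt.Println(moved, err)
    }
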
	I1209 01:56:09.766987   16330 start.go:496] detecting cgroup driver to use...
	I1209 01:56:09.767012   16330 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 01:56:09.767045   16330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 01:56:09.781250   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 01:56:09.791850   16330 docker.go:218] disabling cri-docker service (if available) ...
	I1209 01:56:09.791884   16330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 01:56:09.805918   16330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 01:56:09.821811   16330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 01:56:09.897950   16330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 01:56:09.983124   16330 docker.go:234] disabling docker service ...
	I1209 01:56:09.983181   16330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 01:56:09.999259   16330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 01:56:10.010167   16330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 01:56:10.088160   16330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 01:56:10.159913   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 01:56:10.170717   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 01:56:10.183246   16330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 01:56:10.183306   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.192052   16330 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 01:56:10.192105   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.199757   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.207279   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.214949   16330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 01:56:10.221958   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.229427   16330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:10.241263   16330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
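
The block of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two edits against a made-up two-line config, just to make the transformations explicit; it is not how minikube applies them.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Made-up starting values; the real file lives on the node.
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	fmt.Print(conf)
    }
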
	I1209 01:56:10.248859   16330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 01:56:10.255215   16330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 01:56:10.255247   16330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 01:56:10.265623   16330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 01:56:10.272051   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:10.349683   16330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 01:56:10.479364   16330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 01:56:10.479442   16330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 01:56:10.483018   16330 start.go:564] Will wait 60s for crictl version
	I1209 01:56:10.483071   16330 ssh_runner.go:195] Run: which crictl
	I1209 01:56:10.486282   16330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 01:56:10.508801   16330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 01:56:10.508901   16330 ssh_runner.go:195] Run: crio --version
	I1209 01:56:10.533786   16330 ssh_runner.go:195] Run: crio --version
	I1209 01:56:10.561106   16330 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 01:56:10.562057   16330 cli_runner.go:164] Run: docker network inspect addons-598284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 01:56:10.578373   16330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 01:56:10.582076   16330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
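
The /etc/hosts edit above uses a filter-then-append pattern so reruns stay idempotent: drop any line already ending in the tab-separated hostname, re-append the desired mapping, and copy the result back with sudo. The same idea in Go; the sample path keeps the sketch from touching a real /etc/hosts.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
    // matching the grep -v / echo / cp pipeline in the log.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	err := upsertHost("hosts.sample", "192.168.49.1", "host.minikube.internal")
    	fmt.Println(err)
    }
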
	I1209 01:56:10.591429   16330 kubeadm.go:884] updating cluster {Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 01:56:10.591538   16330 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:56:10.591579   16330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.619694   16330 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 01:56:10.619711   16330 crio.go:433] Images already preloaded, skipping extraction
	I1209 01:56:10.619756   16330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:10.642594   16330 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 01:56:10.642612   16330 cache_images.go:86] Images are preloaded, skipping loading
	I1209 01:56:10.642620   16330 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1209 01:56:10.642720   16330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-598284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 01:56:10.642777   16330 ssh_runner.go:195] Run: crio config
	I1209 01:56:10.684046   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:56:10.684072   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:56:10.684091   16330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 01:56:10.684113   16330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-598284 NodeName:addons-598284 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 01:56:10.684252   16330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-598284"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
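
The file just rendered is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) later fed to kubeadm init. A sketch that decodes it document by document with gopkg.in/yaml.v3 and prints each kind, assuming the stream was saved locally as kubeadm.yaml:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // assumed local copy of the rendered config
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// e.g. kubeadm.k8s.io/v1beta4 InitConfiguration, ... ClusterConfiguration, etc.
    		fmt.Println(doc["apiVersion"], doc["kind"])
    	}
    }
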
	
	I1209 01:56:10.684319   16330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 01:56:10.691781   16330 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 01:56:10.691833   16330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 01:56:10.698796   16330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 01:56:10.710040   16330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 01:56:10.723372   16330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1209 01:56:10.734476   16330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 01:56:10.737709   16330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:10.746516   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:10.823268   16330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:10.843535   16330 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284 for IP: 192.168.49.2
	I1209 01:56:10.843556   16330 certs.go:195] generating shared ca certs ...
	I1209 01:56:10.843575   16330 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.843715   16330 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 01:56:10.926367   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt ...
	I1209 01:56:10.926391   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt: {Name:mk790d55dd352f1c7ef088b4fa3cda215d478a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.926557   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key ...
	I1209 01:56:10.926572   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key: {Name:mk512156b260a50233807f4323f62f483a367ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:10.926692   16330 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 01:56:11.011334   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt ...
	I1209 01:56:11.011358   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt: {Name:mka57609f918144b8e592527a59a5a66348a52f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.011518   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key ...
	I1209 01:56:11.011534   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key: {Name:mked7769cc0b81a98ffde923610023dc6ee34491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.011646   16330 certs.go:257] generating profile certs ...
	I1209 01:56:11.011707   16330 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key
	I1209 01:56:11.011721   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt with IP's: []
	I1209 01:56:11.136936   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt ...
	I1209 01:56:11.136957   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: {Name:mka0584a25dc5e0099dc0467c4404ba608f812b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.137111   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key ...
	I1209 01:56:11.137125   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.key: {Name:mk521c5b80fdbaf18aafa3d3a79f2225fc6dbc13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.137223   16330 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407
	I1209 01:56:11.137243   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 01:56:11.387957   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 ...
	I1209 01:56:11.387986   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407: {Name:mk5c8a6fab4fbb5af4410c999df04ccddbd6ca04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.388167   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407 ...
	I1209 01:56:11.388184   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407: {Name:mk177c432696896647441f42287fc80bec97a241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.388288   16330 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt.64adb407 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt
	I1209 01:56:11.388367   16330 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key.64adb407 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key
	I1209 01:56:11.388415   16330 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key
	I1209 01:56:11.388442   16330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt with IP's: []
	I1209 01:56:11.518312   16330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt ...
	I1209 01:56:11.518339   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt: {Name:mke190163091ae0514c0215dcaf842dfb9f53535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.518507   16330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key ...
	I1209 01:56:11.518524   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key: {Name:mk7ea6183762eae8f6765d6091c424280ff7f088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:11.518735   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 01:56:11.518773   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 01:56:11.518799   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 01:56:11.518830   16330 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 01:56:11.519402   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 01:56:11.536364   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 01:56:11.552192   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 01:56:11.567345   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 01:56:11.582363   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 01:56:11.597642   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 01:56:11.612873   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 01:56:11.628160   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 01:56:11.643241   16330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 01:56:11.660074   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 01:56:11.670954   16330 ssh_runner.go:195] Run: openssl version
	I1209 01:56:11.676929   16330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.683742   16330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 01:56:11.692939   16330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.696487   16330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.696536   16330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:11.730090   16330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 01:56:11.736678   16330 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
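
The b5213941.0 symlink name above is OpenSSL's subject hash of minikubeCA.pem (exactly what the earlier `openssl x509 -hash -noout` run prints); OpenSSL resolves trust lookups via <hash>.0 links in /etc/ssl/certs. A sketch that shells out for the hash rather than re-implementing OpenSSL's canonical subject encoding:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the subject-name hash OpenSSL uses for the
    // /etc/ssl/certs/<hash>.0 symlink scheme.
    func subjectHash(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	h, err := subjectHash("minikubeCA.pem") // would print b5213941 for this run's CA
    	fmt.Println(h, err)
    }
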
	I1209 01:56:11.743309   16330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 01:56:11.746419   16330 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 01:56:11.746466   16330 kubeadm.go:401] StartCluster: {Name:addons-598284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-598284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:56:11.746532   16330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:56:11.746570   16330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:56:11.770731   16330 cri.go:89] found id: ""
	I1209 01:56:11.770802   16330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 01:56:11.777684   16330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 01:56:11.784693   16330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 01:56:11.784755   16330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 01:56:11.791450   16330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 01:56:11.791470   16330 kubeadm.go:158] found existing configuration files:
	
	I1209 01:56:11.791503   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 01:56:11.798128   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 01:56:11.798171   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 01:56:11.804500   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 01:56:11.810991   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 01:56:11.811035   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 01:56:11.817450   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 01:56:11.824006   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 01:56:11.824043   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 01:56:11.830294   16330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 01:56:11.836875   16330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 01:56:11.836916   16330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 01:56:11.843283   16330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 01:56:11.894889   16330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 01:56:11.945786   16330 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 01:56:20.658736   16330 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 01:56:20.658844   16330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 01:56:20.658988   16330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 01:56:20.659068   16330 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 01:56:20.659113   16330 kubeadm.go:319] OS: Linux
	I1209 01:56:20.659190   16330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 01:56:20.659273   16330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 01:56:20.659352   16330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 01:56:20.659420   16330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 01:56:20.659494   16330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 01:56:20.659571   16330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 01:56:20.659665   16330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 01:56:20.659730   16330 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 01:56:20.659831   16330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 01:56:20.659958   16330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 01:56:20.660047   16330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 01:56:20.660101   16330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 01:56:20.661507   16330 out.go:252]   - Generating certificates and keys ...
	I1209 01:56:20.661577   16330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 01:56:20.661676   16330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 01:56:20.661778   16330 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 01:56:20.661844   16330 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 01:56:20.661898   16330 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 01:56:20.661941   16330 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 01:56:20.662008   16330 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 01:56:20.662186   16330 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-598284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 01:56:20.662259   16330 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 01:56:20.662417   16330 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-598284 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 01:56:20.662506   16330 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 01:56:20.662599   16330 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 01:56:20.662681   16330 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 01:56:20.662740   16330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 01:56:20.662789   16330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 01:56:20.662871   16330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 01:56:20.662928   16330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 01:56:20.662995   16330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 01:56:20.663075   16330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 01:56:20.663159   16330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 01:56:20.663218   16330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 01:56:20.664270   16330 out.go:252]   - Booting up control plane ...
	I1209 01:56:20.664345   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 01:56:20.664423   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 01:56:20.664518   16330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 01:56:20.664665   16330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 01:56:20.664787   16330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 01:56:20.664919   16330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 01:56:20.664992   16330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 01:56:20.665030   16330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 01:56:20.665149   16330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 01:56:20.665236   16330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 01:56:20.665288   16330 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001667737s
	I1209 01:56:20.665406   16330 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 01:56:20.665524   16330 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1209 01:56:20.665608   16330 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 01:56:20.665685   16330 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 01:56:20.665749   16330 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006217248s
	I1209 01:56:20.665810   16330 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.509129045s
	I1209 01:56:20.665867   16330 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001144176s
	I1209 01:56:20.665964   16330 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 01:56:20.666076   16330 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 01:56:20.666129   16330 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 01:56:20.666291   16330 kubeadm.go:319] [mark-control-plane] Marking the node addons-598284 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 01:56:20.666339   16330 kubeadm.go:319] [bootstrap-token] Using token: uk26sz.ajgylvhs2iiq32ld
	I1209 01:56:20.668063   16330 out.go:252]   - Configuring RBAC rules ...
	I1209 01:56:20.668149   16330 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 01:56:20.668236   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 01:56:20.668360   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 01:56:20.668477   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 01:56:20.668584   16330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 01:56:20.668669   16330 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 01:56:20.668771   16330 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 01:56:20.668834   16330 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 01:56:20.668894   16330 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 01:56:20.668900   16330 kubeadm.go:319] 
	I1209 01:56:20.668950   16330 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 01:56:20.668956   16330 kubeadm.go:319] 
	I1209 01:56:20.669024   16330 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 01:56:20.669031   16330 kubeadm.go:319] 
	I1209 01:56:20.669052   16330 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 01:56:20.669119   16330 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 01:56:20.669199   16330 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 01:56:20.669205   16330 kubeadm.go:319] 
	I1209 01:56:20.669279   16330 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 01:56:20.669290   16330 kubeadm.go:319] 
	I1209 01:56:20.669360   16330 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 01:56:20.669373   16330 kubeadm.go:319] 
	I1209 01:56:20.669447   16330 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 01:56:20.669546   16330 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 01:56:20.669618   16330 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 01:56:20.669625   16330 kubeadm.go:319] 
	I1209 01:56:20.669703   16330 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 01:56:20.669768   16330 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 01:56:20.669774   16330 kubeadm.go:319] 
	I1209 01:56:20.669850   16330 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uk26sz.ajgylvhs2iiq32ld \
	I1209 01:56:20.669960   16330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 01:56:20.669980   16330 kubeadm.go:319] 	--control-plane 
	I1209 01:56:20.669986   16330 kubeadm.go:319] 
	I1209 01:56:20.670083   16330 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 01:56:20.670095   16330 kubeadm.go:319] 
	I1209 01:56:20.670169   16330 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uk26sz.ajgylvhs2iiq32ld \
	I1209 01:56:20.670272   16330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
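The [control-plane-check] lines earlier in this init output poll each component's health endpoint on a fixed cadence until it answers 200 or the 4m0s budget runs out. A minimal sketch of such a poll, assuming a plain HTTPS GET with certificate verification disabled (kubeadm itself uses its own client configuration, not this shortcut):

```go
// Sketch: poll a control-plane health endpoint until it reports healthy,
// mirroring the [control-plane-check] lines above. The endpoint URL is
// taken from the log; InsecureSkipVerify is an illustrative assumption.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll on a short, fixed cadence
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	start := time.Now()
	if err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("kube-controller-manager is healthy after %s\n", time.Since(start))
}
```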
	I1209 01:56:20.670282   16330 cni.go:84] Creating CNI manager for ""
	I1209 01:56:20.670288   16330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 01:56:20.671536   16330 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 01:56:20.672496   16330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 01:56:20.676615   16330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 01:56:20.676630   16330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 01:56:20.688683   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 01:56:20.875721   16330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 01:56:20.875777   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:20.875807   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-598284 minikube.k8s.io/updated_at=2025_12_09T01_56_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-598284 minikube.k8s.io/primary=true
	I1209 01:56:20.884569   16330 ops.go:34] apiserver oom_adj: -16
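The oom_adj probe above confirms the apiserver is shielded from the kernel OOM killer (-16 makes it a very unlikely victim). A sketch of the same probe, again with exec.Command standing in for the SSH runner:

```go
// Sketch of the oom_adj check logged above: find the kube-apiserver PID
// with pgrep and read its /proc/<pid>/oom_adj. Local execution is an
// illustrative stand-in for minikube's ssh_runner.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first match is the server process
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	// A strongly negative value tells the OOM killer to prefer other victims.
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```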
	I1209 01:56:20.949175   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:21.449533   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:21.949605   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:22.449984   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:22.949456   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:23.449671   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:23.949302   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:24.449479   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:24.950207   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:25.449690   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:25.949318   16330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:26.007442   16330 kubeadm.go:1114] duration metric: took 5.131712127s to wait for elevateKubeSystemPrivileges
	I1209 01:56:26.007470   16330 kubeadm.go:403] duration metric: took 14.261006976s to StartCluster
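The run of `kubectl get sa default` calls above is a half-second polling loop: it spins until the default ServiceAccount exists, the signal that kube-system privileges can be elevated, then emits the duration metric. A sketch of that loop, with the kubectl path and kubeconfig taken from the log (the 30-second deadline is an illustrative bound, not the value minikube uses):

```go
// Sketch of the 500ms ServiceAccount polling loop logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	deadline := start.Add(30 * time.Second) // illustrative bound
	for {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			break // the ServiceAccount controller has caught up
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for default ServiceAccount:", err)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("duration metric: took %s to wait for elevateKubeSystemPrivileges\n",
		time.Since(start))
}
```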
	I1209 01:56:26.007486   16330 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:26.007614   16330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:56:26.008033   16330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:26.008211   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 01:56:26.008234   16330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:56:26.008290   16330 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
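The toEnable map above drives everything that follows: each addon flagged true gets its own "Setting addon" path, which is why the subsequent log lines interleave. A minimal sketch of consuming such a map, using a small illustrative subset of the logged entries:

```go
// Sketch: filter a toEnable map down to the addons flagged true and act
// on them in a stable order. The map contents here are a trimmed subset
// of the logged set, for illustration only.
package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"volcano": true, "ingress": true, "registry": true,
		"dashboard": false, "ambassador": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // map iteration order is random; sort for determinism
	for _, name := range enabled {
		fmt.Printf("Setting addon %s=true in \"addons-598284\"\n", name)
	}
}
```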
	I1209 01:56:26.008428   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:26.008443   16330 addons.go:70] Setting storage-provisioner=true in profile "addons-598284"
	I1209 01:56:26.008451   16330 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-598284"
	I1209 01:56:26.008463   16330 addons.go:239] Setting addon storage-provisioner=true in "addons-598284"
	I1209 01:56:26.008471   16330 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-598284"
	I1209 01:56:26.008430   16330 addons.go:70] Setting yakd=true in profile "addons-598284"
	I1209 01:56:26.008502   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008507   16330 addons.go:239] Setting addon yakd=true in "addons-598284"
	I1209 01:56:26.008515   16330 addons.go:70] Setting ingress=true in profile "addons-598284"
	I1209 01:56:26.008513   16330 addons.go:70] Setting default-storageclass=true in profile "addons-598284"
	I1209 01:56:26.008539   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008544   16330 addons.go:70] Setting metrics-server=true in profile "addons-598284"
	I1209 01:56:26.008560   16330 addons.go:70] Setting volcano=true in profile "addons-598284"
	I1209 01:56:26.008564   16330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-598284"
	I1209 01:56:26.008573   16330 addons.go:70] Setting inspektor-gadget=true in profile "addons-598284"
	I1209 01:56:26.008582   16330 addons.go:70] Setting registry=true in profile "addons-598284"
	I1209 01:56:26.008589   16330 addons.go:239] Setting addon volcano=true in "addons-598284"
	I1209 01:56:26.008596   16330 addons.go:70] Setting volumesnapshots=true in profile "addons-598284"
	I1209 01:56:26.008606   16330 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-598284"
	I1209 01:56:26.008611   16330 addons.go:239] Setting addon volumesnapshots=true in "addons-598284"
	I1209 01:56:26.008547   16330 addons.go:70] Setting registry-creds=true in profile "addons-598284"
	I1209 01:56:26.008629   16330 addons.go:239] Setting addon registry-creds=true in "addons-598284"
	I1209 01:56:26.008441   16330 addons.go:70] Setting ingress-dns=true in profile "addons-598284"
	I1209 01:56:26.008611   16330 addons.go:70] Setting cloud-spanner=true in profile "addons-598284"
	I1209 01:56:26.008661   16330 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-598284"
	I1209 01:56:26.008664   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008667   16330 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-598284"
	I1209 01:56:26.008509   16330 addons.go:70] Setting gcp-auth=true in profile "addons-598284"
	I1209 01:56:26.008682   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008688   16330 mustload.go:66] Loading cluster: addons-598284
	I1209 01:56:26.008658   16330 addons.go:239] Setting addon ingress-dns=true in "addons-598284"
	I1209 01:56:26.008645   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008722   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008838   16330 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:26.008933   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009062   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009077   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009129   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.009156   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008503   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.009189   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008615   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008587   16330 addons.go:239] Setting addon inspektor-gadget=true in "addons-598284"
	I1209 01:56:26.009768   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008686   16330 addons.go:239] Setting addon cloud-spanner=true in "addons-598284"
	I1209 01:56:26.009801   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008599   16330 addons.go:239] Setting addon registry=true in "addons-598284"
	I1209 01:56:26.009941   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.010280   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.010393   16330 out.go:179] * Verifying Kubernetes components...
	I1209 01:56:26.010404   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.008529   16330 addons.go:239] Setting addon ingress=true in "addons-598284"
	I1209 01:56:26.010484   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008565   16330 addons.go:239] Setting addon metrics-server=true in "addons-598284"
	I1209 01:56:26.010602   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.008686   16330 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-598284"
	I1209 01:56:26.008573   16330 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-598284"
	I1209 01:56:26.010782   16330 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-598284"
	I1209 01:56:26.010803   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.009162   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.011492   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.010289   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.012505   16330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:26.020265   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.021356   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.021849   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.022361   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.022738   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.024376   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
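Each of the docker container inspect calls above is a cheap state probe through a Go template; every addon path runs one before touching the machine, which accounts for the burst of identical commands. A sketch of the same probe:

```go
// Sketch of the repeated cli_runner probe above: ask Docker for the
// container state via a Go template, as each addon goroutine does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"addons-598284", "--format={{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "running"
}
```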
	I1209 01:56:26.054323   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.055601   16330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 01:56:26.059674   16330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:26.059695   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 01:56:26.059746   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.073769   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 01:56:26.074783   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 01:56:26.074809   16330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 01:56:26.074868   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.080734   16330 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 01:56:26.081877   16330 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:26.081926   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 01:56:26.082003   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.083037   16330 addons.go:239] Setting addon default-storageclass=true in "addons-598284"
	I1209 01:56:26.085692   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.087170   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.092215   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:26.093344   16330 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 01:56:26.093373   16330 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 01:56:26.093395   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 01:56:26.095248   16330 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:26.095271   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1209 01:56:26.095327   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.095461   16330 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:26.095469   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 01:56:26.095486   16330 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 01:56:26.095506   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.097578   16330 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 01:56:26.097580   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:26.098568   16330 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 01:56:26.098584   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 01:56:26.098651   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.101574   16330 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:26.101598   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 01:56:26.101659   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.105278   16330 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 01:56:26.111727   16330 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:26.111745   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 01:56:26.111796   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.115436   16330 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 01:56:26.117678   16330 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:26.117693   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 01:56:26.117750   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.124438   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 01:56:26.124501   16330 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 01:56:26.127601   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 01:56:26.127620   16330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 01:56:26.127730   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.127829   16330 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 01:56:26.128251   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.128485   16330 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 01:56:26.129176   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 01:56:26.134156   16330 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:26.134178   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 01:56:26.134238   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.134541   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 01:56:26.134564   16330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 01:56:26.134611   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	W1209 01:56:26.136034   16330 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 01:56:26.137294   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 01:56:26.140935   16330 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-598284"
	I1209 01:56:26.140997   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:26.141867   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:26.145163   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 01:56:26.146386   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 01:56:26.150536   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 01:56:26.151618   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 01:56:26.155367   16330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
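The sed pipeline above rewrites the CoreDNS Corefile in place, inserting a hosts block that resolves host.minikube.internal to the host gateway just before the forward plugin would shadow it. A sketch of the same transformation on a trimmed example Corefile (the input string here is illustrative, not the ConfigMap fetched in the log):

```go
// Sketch of the Corefile rewrite performed by the sed pipeline above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	hosts := `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hosts) // inject before the forward plugin
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
```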
	I1209 01:56:26.157439   16330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 01:56:26.159505   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.160183   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 01:56:26.160318   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 01:56:26.160416   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.165059   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173159   16330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:26.173203   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173242   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173207   16330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 01:56:26.173428   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.173180   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.173163   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.183216   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.186552   16330 out.go:179]   - Using image docker.io/busybox:stable
	I1209 01:56:26.187717   16330 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 01:56:26.188708   16330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:26.188727   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 01:56:26.188780   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:26.193887   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.198971   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.201560   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.201781   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.209597   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:26.213395   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	W1209 01:56:26.214472   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.217712   16330 retry.go:31] will retry after 208.387831ms: ssh: handshake failed: EOF
	W1209 01:56:26.218827   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.218850   16330 retry.go:31] will retry after 203.482166ms: ssh: handshake failed: EOF
	I1209 01:56:26.222086   16330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:26.235984   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	W1209 01:56:26.239506   16330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 01:56:26.239545   16330 retry.go:31] will retry after 235.584956ms: ssh: handshake failed: EOF
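The handshake failures above are a startup race: several addon goroutines dial before sshd in the freshly started container is fully ready, so sshutil retries after a short jittered interval. A sketch of that retry shape; dialSSH is a hypothetical stand-in for the real dialer, and the simulated failure is purely illustrative:

```go
// Sketch of the dial-retry behaviour logged above. dialSSH is a
// hypothetical placeholder, not minikube's sshutil API.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func dialSSH(addr string) error {
	// Placeholder: pretend early attempts lose the handshake race
	// while sshd inside the fresh container is still warming up.
	if rand.Intn(2) == 0 {
		return errors.New("ssh: handshake failed: EOF")
	}
	return nil
}

func main() {
	addr := "127.0.0.1:32768"
	for attempt := 1; ; attempt++ {
		if err := dialSSH(addr); err == nil {
			fmt.Println("new ssh client:", addr)
			return
		} else if attempt == 5 {
			fmt.Println("giving up:", err)
			return
		} else {
			// Jittered wait, matching the ~200-240ms intervals in the log.
			wait := 200*time.Millisecond + time.Duration(rand.Intn(50))*time.Millisecond
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
	}
}
```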
	I1209 01:56:26.309229   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:26.312294   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 01:56:26.312318   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 01:56:26.327852   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 01:56:26.327882   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 01:56:26.336119   16330 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 01:56:26.336141   16330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 01:56:26.338055   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 01:56:26.338071   16330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 01:56:26.340853   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:26.351530   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:26.357349   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:26.358898   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 01:56:26.358916   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 01:56:26.362316   16330 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:26.362338   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 01:56:26.363848   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:26.363888   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:26.364736   16330 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 01:56:26.364753   16330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 01:56:26.365172   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:26.370061   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 01:56:26.370076   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 01:56:26.375888   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 01:56:26.375906   16330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 01:56:26.387814   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:26.390424   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 01:56:26.390506   16330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 01:56:26.395187   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 01:56:26.395352   16330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 01:56:26.410355   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 01:56:26.410380   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 01:56:26.427379   16330 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:26.427403   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 01:56:26.442594   16330 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:26.442616   16330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 01:56:26.444199   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 01:56:26.444216   16330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 01:56:26.449960   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 01:56:26.450041   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 01:56:26.458228   16330 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1209 01:56:26.460068   16330 node_ready.go:35] waiting up to 6m0s for node "addons-598284" to be "Ready" ...
	I1209 01:56:26.499462   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:26.500387   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 01:56:26.500463   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 01:56:26.512320   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:26.515403   16330 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:26.515478   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 01:56:26.560615   16330 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 01:56:26.560658   16330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 01:56:26.594140   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:26.628817   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 01:56:26.628841   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 01:56:26.680336   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:26.685087   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 01:56:26.685171   16330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 01:56:26.686178   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:26.688541   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:26.743188   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 01:56:26.743210   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 01:56:26.789401   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 01:56:26.789428   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 01:56:26.855506   16330 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:26.855531   16330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 01:56:26.929909   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:26.971438   16330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-598284" context rescaled to 1 replicas
	I1209 01:56:27.480560   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.115358745s)
	I1209 01:56:27.480608   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.092757545s)
	I1209 01:56:27.480649   16330 addons.go:495] Verifying addon registry=true in "addons-598284"
	I1209 01:56:27.481068   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.117196233s)
	I1209 01:56:27.481090   16330 addons.go:495] Verifying addon ingress=true in "addons-598284"
	I1209 01:56:27.483479   16330 out.go:179] * Verifying ingress addon...
	I1209 01:56:27.483510   16330 out.go:179] * Verifying registry addon...
	I1209 01:56:27.486082   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 01:56:27.486246   16330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 01:56:27.492213   16330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:27.492231   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:27.492564   16330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 01:56:27.492572   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:27.945216   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.445647677s)
	W1209 01:56:27.945279   16330 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 01:56:27.945284   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.432863988s)
	I1209 01:56:27.945303   16330 addons.go:495] Verifying addon metrics-server=true in "addons-598284"
	I1209 01:56:27.945303   16330 retry.go:31] will retry after 331.680871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
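Both failures above are the same CRD race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, before those CRDs are established in the API server. The log's cure is a timed retry (and, below, `apply --force`); an alternative sketch is to wait explicitly for the Established condition first, here by shelling out to kubectl (CRD names come from the log, the exec approach is an illustrative shortcut):

```go
// Sketch: block until the snapshot CRDs reach Established before
// re-applying the objects that depend on them.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	crds := []string{
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshots.snapshot.storage.k8s.io",
	}
	for _, crd := range crds {
		out, err := exec.Command("kubectl", "wait",
			"--for=condition=established", "--timeout=60s", "crd/"+crd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
			return
		}
	}
}
```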
	I1209 01:56:27.945348   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.351066607s)
	I1209 01:56:27.945385   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.265022022s)
	I1209 01:56:27.945436   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.259192618s)
	I1209 01:56:27.945501   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.256900352s)
	I1209 01:56:27.945720   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.015777742s)
	I1209 01:56:27.945744   16330 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-598284"
	I1209 01:56:27.946697   16330 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 01:56:27.946699   16330 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-598284 service yakd-dashboard -n yakd-dashboard
	
	I1209 01:56:27.948916   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 01:56:27.951556   16330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:27.951575   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:27.954050   16330 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
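The storage class error above is Kubernetes' optimistic-concurrency conflict: two writers raced on csi-hostpath-sc, and the loser must re-read and re-submit. Since kubectl patch re-reads the object on each invocation, retrying the whole command is a sufficient cure; a sketch of such a retry-on-conflict loop (the patch payload here is illustrative, not the exact one the addon sends):

```go
// Sketch: retry a kubectl patch that can lose an optimistic-concurrency
// race ("the object has been modified").
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}`
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "patch", "storageclass",
			"csi-hostpath-sc", "-p", patch).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
}
```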
	I1209 01:56:28.015694   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:28.015778   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.277490   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:28.452793   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:28.464832   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:28.553687   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.553750   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:28.951944   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:28.988598   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:28.988625   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:29.452206   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:29.552496   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:29.552579   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:29.952570   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:30.053198   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.053292   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:30.451157   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:30.488335   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:30.488347   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.714409   16330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.436873742s)
	I1209 01:56:30.952190   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:30.961700   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:30.988495   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:30.988680   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:31.452181   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:31.552691   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:31.552841   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:31.951870   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:31.988378   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:31.988601   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:32.452552   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:32.553082   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:32.553259   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:32.952106   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:32.962979   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:32.988800   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:32.988933   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:33.452165   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:33.553255   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:33.553432   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:33.662389   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 01:56:33.662458   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:33.679555   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:33.783717   16330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 01:56:33.795169   16330 addons.go:239] Setting addon gcp-auth=true in "addons-598284"
	I1209 01:56:33.795221   16330 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:56:33.795563   16330 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:56:33.812234   16330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 01:56:33.812275   16330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:56:33.828064   16330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:56:33.915787   16330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:33.917138   16330 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 01:56:33.918307   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 01:56:33.918323   16330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 01:56:33.930463   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 01:56:33.930493   16330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 01:56:33.942027   16330 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:33.942043   16330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 01:56:33.952588   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:33.953775   16330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:33.989049   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:33.989065   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:34.228752   16330 addons.go:495] Verifying addon gcp-auth=true in "addons-598284"
	I1209 01:56:34.232786   16330 out.go:179] * Verifying gcp-auth addon...
	I1209 01:56:34.234602   16330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 01:56:34.236556   16330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 01:56:34.236576   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
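The gcp-auth wait polls the pod behind the label selector printed above; the same check can be run by hand against the profile's kubeconfig (a sketch, assuming the addon pod has already been scheduled into the gcp-auth namespace):

		# label and namespace as logged by kapi.go above; -w watches for state changes
		kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -w
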
	I1209 01:56:34.452579   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:34.488364   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:34.488716   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:34.737932   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:34.952429   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:35.054162   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.054218   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:35.237841   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:35.451247   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:35.462226   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:35.488254   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.488419   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:35.737536   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:35.952100   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:35.988918   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:35.989205   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:36.237492   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:36.452086   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:36.489112   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:36.489341   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:36.737806   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:36.952602   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:36.988475   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:36.988760   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:37.237157   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:37.451373   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1209 01:56:37.462264   16330 node_ready.go:57] node "addons-598284" has "Ready":"False" status (will retry)
	I1209 01:56:37.488737   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:37.488749   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:37.736891   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:37.951383   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:37.987936   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:37.988135   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:38.236931   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:38.454881   16330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:38.454908   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:38.465471   16330 node_ready.go:49] node "addons-598284" is "Ready"
	I1209 01:56:38.465520   16330 node_ready.go:38] duration metric: took 12.00543085s for node "addons-598284" to be "Ready" ...
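The ~12s node-ready wait covers the container runtime and CNI coming up; an equivalent standalone check is a blocking kubectl wait on the node's Ready condition:

		# node name from this run's profile
		kubectl wait --for=condition=Ready node/addons-598284 --timeout=2m
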
	I1209 01:56:38.465542   16330 api_server.go:52] waiting for apiserver process to appear ...
	I1209 01:56:38.465686   16330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 01:56:38.489920   16330 api_server.go:72] duration metric: took 12.481658854s to wait for apiserver process to appear ...
	I1209 01:56:38.489945   16330 api_server.go:88] waiting for apiserver healthz status ...
	I1209 01:56:38.489968   16330 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 01:56:38.494680   16330 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
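The healthz probe above can be reproduced from the host: the apiserver serves /healthz over TLS on the address shown, and kubeadm's default system:public-info-viewer binding allows even anonymous callers to read it (a sketch; -k skips verification of minikube's self-signed certificate):

		# expected response body: ok
		curl -k https://192.168.49.2:8443/healthz
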
	I1209 01:56:38.495649   16330 api_server.go:141] control plane version: v1.34.2
	I1209 01:56:38.495676   16330 api_server.go:131] duration metric: took 5.723129ms to wait for apiserver health ...
	I1209 01:56:38.495687   16330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 01:56:38.553411   16330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:38.553433   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:38.553520   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:38.554366   16330 system_pods.go:59] 20 kube-system pods found
	I1209 01:56:38.554392   16330 system_pods.go:61] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.554401   16330 system_pods.go:61] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.554412   16330 system_pods.go:61] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.554423   16330 system_pods.go:61] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.554432   16330 system_pods.go:61] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.554440   16330 system_pods.go:61] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.554449   16330 system_pods.go:61] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.554452   16330 system_pods.go:61] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.554457   16330 system_pods.go:61] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.554462   16330 system_pods.go:61] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.554470   16330 system_pods.go:61] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.554473   16330 system_pods.go:61] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.554478   16330 system_pods.go:61] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.554485   16330 system_pods.go:61] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.554503   16330 system_pods.go:61] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.554511   16330 system_pods.go:61] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.554523   16330 system_pods.go:61] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.554531   16330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.554544   16330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.554552   16330 system_pods.go:61] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.554560   16330 system_pods.go:74] duration metric: took 58.866538ms to wait for pod list to return data ...
	I1209 01:56:38.554570   16330 default_sa.go:34] waiting for default service account to be created ...
	I1209 01:56:38.556370   16330 default_sa.go:45] found service account: "default"
	I1209 01:56:38.556386   16330 default_sa.go:55] duration metric: took 1.810811ms for default service account to be created ...
	I1209 01:56:38.556394   16330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 01:56:38.559115   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:38.559137   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.559143   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.559149   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.559154   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.559160   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.559169   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.559173   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.559177   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.559181   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.559185   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.559189   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.559192   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.559196   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.559201   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.559207   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.559220   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.559224   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.559229   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.559238   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.559243   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.559256   16330 retry.go:31] will retry after 281.636363ms: missing components: kube-dns
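The "missing components: kube-dns" retry means CoreDNS has not reported Ready yet; the retry loop is equivalent to a blocking wait on the standard k8s-app=kube-dns label that CoreDNS pods carry:

		# blocks until the coredns pod(s) are Ready or the timeout expires
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
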
	I1209 01:56:38.737837   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:38.845465   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:38.845497   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:38.845508   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:38.845520   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:38.845529   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:38.845537   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:38.845541   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:38.845546   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:38.845553   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:38.845557   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:38.845564   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:38.845568   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:38.845574   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:38.845580   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:38.845586   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:38.845595   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:38.845604   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:38.845613   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:38.845621   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.845657   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:38.845668   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:38.845686   16330 retry.go:31] will retry after 343.536778ms: missing components: kube-dns
	I1209 01:56:38.953096   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:38.989664   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:38.989963   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:39.194986   16330 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:39.195024   16330 system_pods.go:89] "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:39.195032   16330 system_pods.go:89] "coredns-66bc5c9577-fvxpf" [b39f69ad-d9a2-46a8-b50c-f793e1d8ce3b] Running
	I1209 01:56:39.195050   16330 system_pods.go:89] "csi-hostpath-attacher-0" [e5e6a133-9661-4816-97ff-0ca906b1abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:39.195059   16330 system_pods.go:89] "csi-hostpath-resizer-0" [c454d7a6-4150-40c9-a864-83dd3ef8127e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:39.195073   16330 system_pods.go:89] "csi-hostpathplugin-c7mht" [d70f30e5-fb6a-4a6f-9ea3-ec6e64f554eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:39.195081   16330 system_pods.go:89] "etcd-addons-598284" [7bd9cadd-a36f-4c76-a597-16350f86d0f3] Running
	I1209 01:56:39.195088   16330 system_pods.go:89] "kindnet-krjk7" [48bd1b60-eff0-408f-92db-9274638be9f7] Running
	I1209 01:56:39.195104   16330 system_pods.go:89] "kube-apiserver-addons-598284" [03a80f84-a979-4ae7-a2cb-80b13bc270cb] Running
	I1209 01:56:39.195110   16330 system_pods.go:89] "kube-controller-manager-addons-598284" [5ae01528-fe82-42ac-9809-948d47276f79] Running
	I1209 01:56:39.195119   16330 system_pods.go:89] "kube-ingress-dns-minikube" [23f18a2c-e473-4fae-9b0d-0bfcaa13dcd6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:39.195124   16330 system_pods.go:89] "kube-proxy-xb9c9" [8998909f-3f57-450c-8953-06e3c7569b20] Running
	I1209 01:56:39.195130   16330 system_pods.go:89] "kube-scheduler-addons-598284" [f1b03e02-897e-4e86-9e32-8961359623fd] Running
	I1209 01:56:39.195137   16330 system_pods.go:89] "metrics-server-85b7d694d7-bzvbq" [9a2defb7-26b5-424b-b49c-b10b47007095] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:39.195152   16330 system_pods.go:89] "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:39.195159   16330 system_pods.go:89] "registry-6b586f9694-g2qp5" [6aced207-434f-45dc-8005-52d4e7307bea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:39.195167   16330 system_pods.go:89] "registry-creds-764b6fb674-25mz9" [f17f8725-46a5-42b4-b1eb-7d839244e156] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:39.195174   16330 system_pods.go:89] "registry-proxy-nhhw6" [a92d010f-222e-4542-8e2a-29b8429da13a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:39.195181   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rzs" [fd052a67-9536-476b-9440-a6ca436baa1e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:39.195201   16330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qg54s" [3d52e377-2796-417c-9ce8-7394adfc19c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:39.195212   16330 system_pods.go:89] "storage-provisioner" [1fc26aca-04eb-42b6-8ca4-63ea83f534a5] Running
	I1209 01:56:39.195223   16330 system_pods.go:126] duration metric: took 638.823026ms to wait for k8s-apps to be running ...
	I1209 01:56:39.195233   16330 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 01:56:39.195288   16330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 01:56:39.270735   16330 system_svc.go:56] duration metric: took 75.492081ms WaitForService to wait for kubelet
	I1209 01:56:39.270770   16330 kubeadm.go:587] duration metric: took 13.262513608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:56:39.270794   16330 node_conditions.go:102] verifying NodePressure condition ...
	I1209 01:56:39.275784   16330 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 01:56:39.275812   16330 node_conditions.go:123] node cpu capacity is 8
	I1209 01:56:39.275829   16330 node_conditions.go:105] duration metric: took 5.028338ms to run NodePressure ...
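The NodePressure check reads the capacity fields the kubelet reports on the node object; the same figures can be pulled directly with a jsonpath query (a sketch against this run's node name):

		# prints cpu and ephemeral-storage capacity, e.g. "8 304681132Ki"
		kubectl get node addons-598284 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
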
	I1209 01:56:39.275842   16330 start.go:242] waiting for startup goroutines ...
	I1209 01:56:39.294467   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:39.453297   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:39.490760   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:39.491070   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:39.738465   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:39.953598   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:39.989160   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:39.989233   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.238387   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:40.452838   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:40.489192   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:40.489227   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.738461   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:40.952678   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:40.989532   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:40.989703   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.238143   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:41.452292   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:41.489735   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.489794   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:41.737664   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:41.953050   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:41.989688   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:41.989733   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:42.237319   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:42.452428   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:42.488727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:42.488897   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:42.738090   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:42.951558   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:42.988677   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:42.988761   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.236831   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:43.451828   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:43.488963   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:43.489009   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.739347   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:43.954203   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:43.990499   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:43.991503   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:44.238531   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:44.452957   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:44.489579   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:44.489619   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:44.737897   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:44.952158   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:44.989851   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:44.989856   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.237446   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:45.452453   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.488779   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.488815   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:45.737070   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:45.951952   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:45.989573   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:45.989776   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.237962   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:46.452341   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:46.488681   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.488765   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:46.737795   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:46.952727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:46.995441   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:46.995466   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.237990   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.452234   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.488253   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.488392   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:47.738092   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:47.952767   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:47.989557   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:47.989989   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.237974   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.452596   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.489052   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.489161   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:48.738119   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:48.952197   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:48.989850   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:48.989883   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:49.237786   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.452437   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.488285   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.488356   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:49.737906   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:49.952512   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:49.989219   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:49.989329   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.238499   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.452357   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:50.553005   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.553136   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.738239   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:50.952557   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:50.989085   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.989256   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.238002   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.451865   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.488731   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:51.488861   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.737405   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:51.952591   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:51.989104   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.989130   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.237603   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.452325   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.488174   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.488191   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:52.738127   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:52.951907   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.989621   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.989830   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.236884   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.451838   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.488827   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.488855   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.737602   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:53.952325   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.988372   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.988419   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.238038   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.451993   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.488806   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.488932   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.738161   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.952322   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.989580   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.989580   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.312532   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.452729   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.489654   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.489730   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.737169   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.952294   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.988840   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.988847   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.238155   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.452363   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.552388   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.552444   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.737660   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.952424   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.988414   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.988482   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.237938   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.452451   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.488862   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.488904   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.737870   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.952590   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.989291   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.989727   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.237795   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.452705   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.488571   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.488712   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:58.737837   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.951491   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.988555   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.988648   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.237335   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.452721   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.490168   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.490210   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.738552   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.952420   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.989011   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.989056   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.237886   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.451796   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.488898   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.488943   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.737973   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.952059   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.989601   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.989618   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.237206   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.451972   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.488761   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.488773   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.737216   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.952358   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.988899   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.989077   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.237117   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.451735   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.488816   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.488843   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.738267   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.952463   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.988862   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.988925   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.238222   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.452317   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.552543   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.552764   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.737393   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.952363   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.990790   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.991156   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.239666   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.452975   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.489335   16330 kapi.go:107] duration metric: took 37.003247964s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 01:57:04.489596   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:04.738356   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.953339   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.990041   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.283366   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.452693   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.489114   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.737724   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.952624   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.988596   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.237054   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.452271   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.490435   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.738092   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.952327   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.989781   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.237619   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.455379   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.490544   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.738022   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.952623   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.989415   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.237945   16330 kapi.go:107] duration metric: took 34.003339464s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 01:57:08.239358   16330 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-598284 cluster.
	I1209 01:57:08.240451   16330 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 01:57:08.241547   16330 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
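	
	The `gcp-auth-skip-secret` opt-out mentioned above is keyed on a pod label. A minimal sketch of applying it (the pod name "my-pod" is hypothetical and not part of this run; per the message above only the label key matters, so any value works):
	
	    kubectl label pod my-pod gcp-auth-skip-secret=true
	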
	I1209 01:57:08.452116   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.578926   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.953484   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.988951   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.452596   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.488969   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.952475   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.990027   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.452409   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.489538   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.952027   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.989046   16330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.452239   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.552916   16330 kapi.go:107] duration metric: took 44.066665843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 01:57:11.952720   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.452738   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.952601   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.452518   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.952117   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.453167   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.952492   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.452764   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.951978   16330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.451966   16330 kapi.go:107] duration metric: took 48.503047854s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
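	
	Each of the kapi.go waits above polls on a short interval until pods matching a label selector are ready, then logs the duration metric. A rough kubectl equivalent for the last selector (a sketch; assumes kubectl points at this cluster, and uses kubectl's built-in Ready condition rather than minikube's internal check):
	
	    kubectl wait --namespace kube-system --for=condition=Ready pod \
	      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m0s
	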
	I1209 01:57:16.453544   16330 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, nvidia-device-plugin, ingress-dns, registry-creds, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 01:57:16.454689   16330 addons.go:530] duration metric: took 50.446402988s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin inspektor-gadget nvidia-device-plugin ingress-dns registry-creds metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1209 01:57:16.454722   16330 start.go:247] waiting for cluster config update ...
	I1209 01:57:16.454740   16330 start.go:256] writing updated cluster config ...
	I1209 01:57:16.454975   16330 ssh_runner.go:195] Run: rm -f paused
	I1209 01:57:16.458791   16330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:57:16.461371   16330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fvxpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.464926   16330 pod_ready.go:94] pod "coredns-66bc5c9577-fvxpf" is "Ready"
	I1209 01:57:16.464947   16330 pod_ready.go:86] duration metric: took 3.557534ms for pod "coredns-66bc5c9577-fvxpf" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.466623   16330 pod_ready.go:83] waiting for pod "etcd-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.469560   16330 pod_ready.go:94] pod "etcd-addons-598284" is "Ready"
	I1209 01:57:16.469581   16330 pod_ready.go:86] duration metric: took 2.92871ms for pod "etcd-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.471088   16330 pod_ready.go:83] waiting for pod "kube-apiserver-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.473980   16330 pod_ready.go:94] pod "kube-apiserver-addons-598284" is "Ready"
	I1209 01:57:16.473998   16330 pod_ready.go:86] duration metric: took 2.89148ms for pod "kube-apiserver-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.475443   16330 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:16.862383   16330 pod_ready.go:94] pod "kube-controller-manager-addons-598284" is "Ready"
	I1209 01:57:16.862408   16330 pod_ready.go:86] duration metric: took 386.947982ms for pod "kube-controller-manager-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.062164   16330 pod_ready.go:83] waiting for pod "kube-proxy-xb9c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.463493   16330 pod_ready.go:94] pod "kube-proxy-xb9c9" is "Ready"
	I1209 01:57:17.463520   16330 pod_ready.go:86] duration metric: took 401.33333ms for pod "kube-proxy-xb9c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:17.662244   16330 pod_ready.go:83] waiting for pod "kube-scheduler-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:18.061837   16330 pod_ready.go:94] pod "kube-scheduler-addons-598284" is "Ready"
	I1209 01:57:18.061861   16330 pod_ready.go:86] duration metric: took 399.594577ms for pod "kube-scheduler-addons-598284" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:57:18.061872   16330 pod_ready.go:40] duration metric: took 1.603058936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:57:18.104702   16330 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 01:57:18.106558   16330 out.go:179] * Done! kubectl is now configured to use "addons-598284" cluster and "default" namespace by default
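	
	The enabled-addons list printed above can be cross-checked after the fact from the CLI (a sketch; assumes the minikube binary and the addons-598284 profile from this run are still available):
	
	    minikube -p addons-598284 addons list
	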
	
	
	==> CRI-O <==
	Dec 09 01:57:46 addons-598284 crio[769]: time="2025-12-09T01:57:46.267105491Z" level=info msg="Stopping pod sandbox: 20e162edd0dd3d664735252ae1a79334e051e44b6180642c711a3e282a955d9a" id=ec9f58a7-fd2a-4648-8cde-6f0854f47bd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:57:46 addons-598284 crio[769]: time="2025-12-09T01:57:46.267348413Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:20e162edd0dd3d664735252ae1a79334e051e44b6180642c711a3e282a955d9a UID:f41b0822-c45b-4395-8051-853eb9a84b69 NetNS:/var/run/netns/7c2134ed-a267-452d-bad0-1470d1b0889b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00040e738}] Aliases:map[]}"
	Dec 09 01:57:46 addons-598284 crio[769]: time="2025-12-09T01:57:46.267463818Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 09 01:57:46 addons-598284 crio[769]: time="2025-12-09T01:57:46.29311156Z" level=info msg="Stopped pod sandbox: 20e162edd0dd3d664735252ae1a79334e051e44b6180642c711a3e282a955d9a" id=ec9f58a7-fd2a-4648-8cde-6f0854f47bd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.097400028Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8/POD" id=fab98d35-688c-4953-82a9-3aba83765fdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.097476557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.105968698Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 Namespace:local-path-storage ID:51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c UID:a797f874-fbbb-4348-a6b2-798f327ef870 NetNS:/var/run/netns/c4b6e058-9dca-4e03-a737-dda400c18af3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00061e828}] Aliases:map[]}"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.105996416Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 to CNI network \"kindnet\" (type=ptp)"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.117453031Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 Namespace:local-path-storage ID:51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c UID:a797f874-fbbb-4348-a6b2-798f327ef870 NetNS:/var/run/netns/c4b6e058-9dca-4e03-a737-dda400c18af3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00061e828}] Aliases:map[]}"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.117609626Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 for CNI network kindnet (type=ptp)"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.118947418Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.12012855Z" level=info msg="Ran pod sandbox 51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c with infra container: local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8/POD" id=fab98d35-688c-4953-82a9-3aba83765fdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.121445938Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=b60a426c-793d-4391-897b-2ce1f9552c3c name=/runtime.v1.ImageService/ImageStatus
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.123333651Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=85aa51f8-1147-4b65-8c0e-8821cbb988ac name=/runtime.v1.ImageService/ImageStatus
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.127800133Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8/helper-pod" id=e297cbd9-8d29-4356-bf91-cf7789434288 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.127953775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.136171192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.136809275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.17399656Z" level=info msg="Created container 2c0478373c8d3167acfde650e51f707dd5aadd7f61ac120584c379782a152989: local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8/helper-pod" id=e297cbd9-8d29-4356-bf91-cf7789434288 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.174698517Z" level=info msg="Starting container: 2c0478373c8d3167acfde650e51f707dd5aadd7f61ac120584c379782a152989" id=c354b1e6-174d-4f69-80b9-7cfb4d775395 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 01:57:47 addons-598284 crio[769]: time="2025-12-09T01:57:47.176705086Z" level=info msg="Started container" PID=7685 containerID=2c0478373c8d3167acfde650e51f707dd5aadd7f61ac120584c379782a152989 description=local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8/helper-pod id=c354b1e6-174d-4f69-80b9-7cfb4d775395 name=/runtime.v1.RuntimeService/StartContainer sandboxID=51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c
	Dec 09 01:57:48 addons-598284 crio[769]: time="2025-12-09T01:57:48.279701269Z" level=info msg="Stopping pod sandbox: 51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c" id=8014cdd9-9379-48a5-8449-4f8381560a1e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 01:57:48 addons-598284 crio[769]: time="2025-12-09T01:57:48.279996438Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 Namespace:local-path-storage ID:51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c UID:a797f874-fbbb-4348-a6b2-798f327ef870 NetNS:/var/run/netns/c4b6e058-9dca-4e03-a737-dda400c18af3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003828e8}] Aliases:map[]}"
	Dec 09 01:57:48 addons-598284 crio[769]: time="2025-12-09T01:57:48.280155835Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8 from CNI network \"kindnet\" (type=ptp)"
	Dec 09 01:57:48 addons-598284 crio[769]: time="2025-12-09T01:57:48.300135026Z" level=info msg="Stopped pod sandbox: 51cb2255177f4af8b5e5bec863a77ce0ee07e7e57c479b20420817c4cb274b5c" id=8014cdd9-9379-48a5-8449-4f8381560a1e name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	2c0478373c8d3       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             1 second ago         Exited              helper-pod                               0                   51cb2255177f4       helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8   local-path-storage
	277cf84c0e512       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            3 seconds ago        Exited              busybox                                  0                   20e162edd0dd3       test-local-path                                              default
	6755ec162936c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             5 seconds ago        Running             registry-creds                           0                   1b1805f9d40d8       registry-creds-764b6fb674-25mz9                              kube-system
	cf051c8885632       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            7 seconds ago        Exited              helper-pod                               0                   07fc9abc6ded1       helper-pod-create-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8   local-path-storage
	24a0e8d0103ac       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                                           8 seconds ago        Running             nginx                                    0                   524e835ef307f       nginx                                                        default
	66e46ba7618c8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          28 seconds ago       Running             busybox                                  0                   3ec98b67aa6dd       busybox                                                      default
	0cf8359e032c5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          33 seconds ago       Running             csi-snapshotter                          0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	09ee3d53e0739       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          34 seconds ago       Running             csi-provisioner                          0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	07327f304dd6a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            35 seconds ago       Running             liveness-probe                           0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	cd4e7d6b980f0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           35 seconds ago       Running             hostpath                                 0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	d32117de7c58e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                36 seconds ago       Running             node-driver-registrar                    0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	165b4491c3f96       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             37 seconds ago       Running             controller                               0                   3f66552b6789e       ingress-nginx-controller-85d4c799dd-rkcmx                    ingress-nginx
	ca4804ab9c48e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 41 seconds ago       Running             gcp-auth                                 0                   f2c049d533745       gcp-auth-78565c9fb4-sg5ff                                    gcp-auth
	79d771e63df5c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            42 seconds ago       Running             gadget                                   0                   a5e553c981796       gadget-cwdlx                                                 gadget
	a22b2817d5b76       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              44 seconds ago       Running             registry-proxy                           0                   799829e163f51       registry-proxy-nhhw6                                         kube-system
	9259d8cba23be       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     46 seconds ago       Running             amd-gpu-device-plugin                    0                   5a34b34bb6e0d       amd-gpu-device-plugin-ftp97                                  kube-system
	58565aa6aebcd       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     47 seconds ago       Running             nvidia-device-plugin-ctr                 0                   741b0f505f2bb       nvidia-device-plugin-daemonset-f8kcp                         kube-system
	258a6b06d27dc       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           50 seconds ago       Running             registry                                 0                   6f17461cfb8ee       registry-6b586f9694-g2qp5                                    kube-system
	4f60883937b8b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   51 seconds ago       Running             csi-external-health-monitor-controller   0                   bdb44500bcec8       csi-hostpathplugin-c7mht                                     kube-system
	99770ac31d147       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              52 seconds ago       Running             csi-resizer                              0                   394070b11927c       csi-hostpath-resizer-0                                       kube-system
	6ad8e94399619       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   54 seconds ago       Exited              patch                                    0                   c81998943be09       ingress-nginx-admission-patch-xg4qv                          ingress-nginx
	a8861dac6b035       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      54 seconds ago       Running             volume-snapshot-controller               0                   c5fa041bea91d       snapshot-controller-7d9fbc56b8-qg54s                         kube-system
	063fd46945671       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   54 seconds ago       Exited              patch                                    0                   dd0c8430cab6d       gcp-auth-certs-patch-splg5                                   gcp-auth
	9dd4a9e3e2bac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   55 seconds ago       Exited              create                                   0                   83b00c50208dd       gcp-auth-certs-create-mxv9j                                  gcp-auth
	ebb1b92540ca1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             55 seconds ago       Running             local-path-provisioner                   0                   7ade1a0238a7d       local-path-provisioner-648f6765c9-r5jbl                      local-path-storage
	c222dc3a3f279       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      56 seconds ago       Running             volume-snapshot-controller               0                   7acf650818914       snapshot-controller-7d9fbc56b8-k5rzs                         kube-system
	7a1b1e01077e4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             57 seconds ago       Running             csi-attacher                             0                   053772d238c2b       csi-hostpath-attacher-0                                      kube-system
	60c37a2cb1cdc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   57 seconds ago       Exited              create                                   0                   c7c6d2d419c1e       ingress-nginx-admission-create-sqg9m                         ingress-nginx
	e745cbc0143b0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              58 seconds ago       Running             yakd                                     0                   6a4e9342b846a       yakd-dashboard-5ff678cb9-kgfgf                               yakd-dashboard
	69b827fe1bc6e       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   240db2311777a       metrics-server-85b7d694d7-bzvbq                              kube-system
	acdbe88ce48d7       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               About a minute ago   Running             cloud-spanner-emulator                   0                   52fedd4d42bdf       cloud-spanner-emulator-5bdddb765-tqgf8                       default
	af1bbcbd5b2e7       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   8094382ecbf41       kube-ingress-dns-minikube                                    kube-system
	2c82ba2d18c01       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   9d7aff926c80b       coredns-66bc5c9577-fvxpf                                     kube-system
	c21d5137f49f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   243f2fe79eb5a       storage-provisioner                                          kube-system
	ea6bd4352d85a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   ae8f3e789172f       kindnet-krjk7                                                kube-system
	c951a1040b335       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   ad04bcae3ea80       kube-proxy-xb9c9                                             kube-system
	40e0aceab5999       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   72a153d634b9a       etcd-addons-598284                                           kube-system
	49c6272ba70f5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   6fbc6f9099273       kube-controller-manager-addons-598284                        kube-system
	16e2e43c2d88b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   59f4b03e8372b       kube-apiserver-addons-598284                                 kube-system
	b5bddb335ebc6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   4eefecc11ae00       kube-scheduler-addons-598284                                 kube-system
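	
	This table is the CRI-level view of the node, i.e. roughly what crictl reports from inside the minikube container. To reproduce it against a live profile (a sketch; assumes the addons-598284 profile is still running):
	
	    minikube -p addons-598284 ssh -- sudo crictl ps -a
	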
	
	
	==> coredns [2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7] <==
	[INFO] 10.244.0.22:43057 - 7051 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000204938s
	[INFO] 10.244.0.22:38292 - 29605 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00477914s
	[INFO] 10.244.0.22:54098 - 27242 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007710524s
	[INFO] 10.244.0.22:59616 - 35348 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004680096s
	[INFO] 10.244.0.22:59771 - 44539 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005136327s
	[INFO] 10.244.0.22:44524 - 44485 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005144146s
	[INFO] 10.244.0.22:49118 - 19164 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005423151s
	[INFO] 10.244.0.22:41416 - 7458 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001099238s
	[INFO] 10.244.0.22:38737 - 27293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001478006s
	[INFO] 10.244.0.24:33122 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166486s
	[INFO] 10.244.0.24:43231 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181565s
	[INFO] 10.244.0.27:59010 - 61812 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000204467s
	[INFO] 10.244.0.27:45642 - 17494 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000276485s
	[INFO] 10.244.0.27:56028 - 28173 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000113807s
	[INFO] 10.244.0.27:50361 - 58261 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000178461s
	[INFO] 10.244.0.27:35866 - 59692 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00010443s
	[INFO] 10.244.0.27:35995 - 48696 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000107362s
	[INFO] 10.244.0.27:46512 - 43215 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004710866s
	[INFO] 10.244.0.27:40268 - 44651 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004850175s
	[INFO] 10.244.0.27:34827 - 19453 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00398877s
	[INFO] 10.244.0.27:44717 - 35325 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004126066s
	[INFO] 10.244.0.27:57367 - 8187 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003528663s
	[INFO] 10.244.0.27:60669 - 62552 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004470312s
	[INFO] 10.244.0.27:38737 - 6843 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001668657s
	[INFO] 10.244.0.27:39463 - 7871 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001812373s
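	
	The NXDOMAIN-then-NOERROR runs above are ordinary resolv.conf search-list expansion: with the default pod ndots:5 setting, an external name such as accounts.google.com is first tried against every search suffix (svc.cluster.local, cluster.local, and the GCE-inherited *.internal domains seen in the queries) before the bare name resolves. This can be observed from any pod with DNS tooling (a sketch; assumes the default-namespace busybox pod from the table above ships nslookup):
	
	    kubectl exec busybox -- nslookup accounts.google.com
	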
	
	
	==> describe nodes <==
	Name:               addons-598284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-598284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-598284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T01_56_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-598284
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-598284"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 01:56:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-598284
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 01:57:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 01:57:21 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 01:57:21 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 01:57:21 +0000   Tue, 09 Dec 2025 01:56:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 01:57:21 +0000   Tue, 09 Dec 2025 01:56:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-598284
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                418097e5-e43f-4ca7-be60-ac2cb9fae4ef
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     cloud-spanner-emulator-5bdddb765-tqgf8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  gadget                      gadget-cwdlx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  gcp-auth                    gcp-auth-78565c9fb4-sg5ff                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-rkcmx    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         81s
	  kube-system                 amd-gpu-device-plugin-ftp97                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 coredns-66bc5c9577-fvxpf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     83s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 csi-hostpathplugin-c7mht                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-addons-598284                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-krjk7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-addons-598284                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-addons-598284        200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-xb9c9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-addons-598284                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 metrics-server-85b7d694d7-bzvbq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         81s
	  kube-system                 nvidia-device-plugin-daemonset-f8kcp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 registry-6b586f9694-g2qp5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-creds-764b6fb674-25mz9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-proxy-nhhw6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 snapshot-controller-7d9fbc56b8-k5rzs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 snapshot-controller-7d9fbc56b8-qg54s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  local-path-storage          local-path-provisioner-648f6765c9-r5jbl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kgfgf               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeHasSufficientMemory  92s (x8 over 93s)  kubelet          Node addons-598284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s (x8 over 93s)  kubelet          Node addons-598284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x8 over 93s)  kubelet          Node addons-598284 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node addons-598284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node addons-598284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node addons-598284 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           84s                node-controller  Node addons-598284 event: Registered Node addons-598284 in Controller
	  Normal  NodeReady                70s                kubelet          Node addons-598284 status is now: NodeReady
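	
	The node summary above is kubectl's node describe output; to regenerate it against a live cluster (a sketch; assumes the kubeconfig written by this run):
	
	    kubectl describe node addons-598284
	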
	
	
	==> dmesg <==
	[Dec 9 01:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001882] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.359332] i8042: Warning: Keylock active
	[  +0.009389] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476020] block sda: the capability attribute has been deprecated.
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7] <==
	{"level":"warn","ts":"2025-12-09T01:56:17.168249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.176790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.183725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.189959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.197888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.204629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.211127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.218620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.227312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.233791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.239954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.248963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.256010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.270878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.276943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.282839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:17.323569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:28.485556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:28.492118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.102855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.109519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.123623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T01:56:52.129739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T01:56:55.310985Z","caller":"traceutil/trace.go:172","msg":"trace[399212708] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"101.046943ms","start":"2025-12-09T01:56:55.209907Z","end":"2025-12-09T01:56:55.310954Z","steps":["trace[399212708] 'process raft request'  (duration: 98.769036ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:08.840054Z","caller":"traceutil/trace.go:172","msg":"trace[1052877146] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"133.518408ms","start":"2025-12-09T01:57:08.706509Z","end":"2025-12-09T01:57:08.840027Z","steps":["trace[1052877146] 'process raft request'  (duration: 133.40629ms)"],"step_count":1}
	
	
	==> gcp-auth [ca4804ab9c48e4ef7ece366b97cb88a4d8b446a3a1009efba581332bfacc94e8] <==
	2025/12/09 01:57:07 GCP Auth Webhook started!
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:18 Ready to marshal response ...
	2025/12/09 01:57:18 Ready to write response ...
	2025/12/09 01:57:36 Ready to marshal response ...
	2025/12/09 01:57:36 Ready to write response ...
	2025/12/09 01:57:38 Ready to marshal response ...
	2025/12/09 01:57:38 Ready to write response ...
	2025/12/09 01:57:39 Ready to marshal response ...
	2025/12/09 01:57:39 Ready to write response ...
	2025/12/09 01:57:39 Ready to marshal response ...
	2025/12/09 01:57:39 Ready to write response ...
	2025/12/09 01:57:46 Ready to marshal response ...
	2025/12/09 01:57:46 Ready to write response ...
	
	
	==> kernel <==
	 01:57:48 up 40 min,  0 user,  load average: 1.65, 0.86, 0.34
	Linux addons-598284 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd] <==
	I1209 01:56:27.277965       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T01:56:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 01:56:27.483563       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 01:56:27.483667       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 01:56:27.483911       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 01:56:27.484552       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 01:56:27.884970       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 01:56:27.885048       1 metrics.go:72] Registering metrics
	I1209 01:56:27.885146       1 controller.go:711] "Syncing nftables rules"
	I1209 01:56:37.484122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:56:37.484187       1 main.go:301] handling current node
	I1209 01:56:47.483798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:56:47.483834       1 main.go:301] handling current node
	I1209 01:56:57.483465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:56:57.483595       1 main.go:301] handling current node
	I1209 01:57:07.483780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:57:07.483827       1 main.go:301] handling current node
	I1209 01:57:17.483624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:57:17.483692       1 main.go:301] handling current node
	I1209 01:57:27.483519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:57:27.483551       1 main.go:301] handling current node
	I1209 01:57:37.484139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:57:37.484180       1 main.go:301] handling current node
	I1209 01:57:47.483770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 01:57:47.483800       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5] <==
	E1209 01:56:38.045422       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:38.044363       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.117.172:443: connect: connection refused
	E1209 01:56:38.045516       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:38.059810       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.117.172:443: connect: connection refused
	E1209 01:56:38.060496       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:38.064297       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.117.172:443: connect: connection refused
	E1209 01:56:38.064331       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.117.172:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.018529       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	W1209 01:56:48.018903       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 01:56:48.018985       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1209 01:56:48.019325       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.024120       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	E1209 01:56:48.044681       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.171.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.171.29:443: connect: connection refused" logger="UnhandledError"
	I1209 01:56:48.118923       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1209 01:56:52.102797       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.109479       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.123561       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 01:56:52.129730       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1209 01:57:25.779350       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37064: use of closed network connection
	E1209 01:57:25.917242       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37106: use of closed network connection
	I1209 01:57:37.910952       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 01:57:38.082874       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.6.223"}
	
	
	==> kube-controller-manager [49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179] <==
	I1209 01:56:24.725777       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 01:56:24.725776       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 01:56:24.725822       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 01:56:24.725877       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 01:56:24.726025       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 01:56:24.727070       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 01:56:24.727103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 01:56:24.728175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 01:56:24.729250       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 01:56:24.730400       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:24.730411       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 01:56:24.730457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 01:56:24.734671       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 01:56:24.735791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:24.738999       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 01:56:24.744211       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 01:56:24.747441       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 01:56:24.749580       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1209 01:56:27.223246       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1209 01:56:39.727837       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1209 01:56:54.741181       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1209 01:56:54.741237       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1209 01:56:54.755045       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 01:56:54.842269       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 01:56:54.855574       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6] <==
	I1209 01:56:26.953375       1 server_linux.go:53] "Using iptables proxy"
	I1209 01:56:27.102590       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 01:56:27.205700       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 01:56:27.205738       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1209 01:56:27.205808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 01:56:27.271823       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 01:56:27.272821       1 server_linux.go:132] "Using iptables Proxier"
	I1209 01:56:27.298466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 01:56:27.305800       1 server.go:527] "Version info" version="v1.34.2"
	I1209 01:56:27.305836       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 01:56:27.312025       1 config.go:200] "Starting service config controller"
	I1209 01:56:27.312056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 01:56:27.312393       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 01:56:27.312418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 01:56:27.312443       1 config.go:106] "Starting endpoint slice config controller"
	I1209 01:56:27.312449       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 01:56:27.312702       1 config.go:309] "Starting node config controller"
	I1209 01:56:27.312807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 01:56:27.413327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 01:56:27.413352       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 01:56:27.413362       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 01:56:27.413631       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d] <==
	E1209 01:56:17.756762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 01:56:17.756808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 01:56:17.756879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:17.756927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:17.756972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 01:56:17.756973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:17.757006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:17.757004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 01:56:17.757081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:17.757099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:17.757105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 01:56:18.582747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:18.675007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:18.689922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 01:56:18.739005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:18.795861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:18.851745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:18.853495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:18.863440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 01:56:18.867387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 01:56:18.872181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 01:56:18.877985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:18.899963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 01:56:18.952860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1209 01:56:19.154601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.502892    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f41b0822-c45b-4395-8051-853eb9a84b69-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f41b0822-c45b-4395-8051-853eb9a84b69" (UID: "f41b0822-c45b-4395-8051-853eb9a84b69"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.502950    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f41b0822-c45b-4395-8051-853eb9a84b69-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8" (OuterVolumeSpecName: "data") pod "f41b0822-c45b-4395-8051-853eb9a84b69" (UID: "f41b0822-c45b-4395-8051-853eb9a84b69"). InnerVolumeSpecName "pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.502989    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f41b0822-c45b-4395-8051-853eb9a84b69-gcp-creds\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.506168    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41b0822-c45b-4395-8051-853eb9a84b69-kube-api-access-5k5v6" (OuterVolumeSpecName: "kube-api-access-5k5v6") pod "f41b0822-c45b-4395-8051-853eb9a84b69" (UID: "f41b0822-c45b-4395-8051-853eb9a84b69"). InnerVolumeSpecName "kube-api-access-5k5v6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.604008    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5k5v6\" (UniqueName: \"kubernetes.io/projected/f41b0822-c45b-4395-8051-853eb9a84b69-kube-api-access-5k5v6\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.604039    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\" (UniqueName: \"kubernetes.io/host-path/f41b0822-c45b-4395-8051-853eb9a84b69-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.906046    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-data\") pod \"helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") " pod="local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8"
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.906119    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-gcp-creds\") pod \"helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") " pod="local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8"
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.906148    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a797f874-fbbb-4348-a6b2-798f327ef870-script\") pod \"helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") " pod="local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8"
	Dec 09 01:57:46 addons-598284 kubelet[1282]: I1209 01:57:46.906166    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lxc\" (UniqueName: \"kubernetes.io/projected/a797f874-fbbb-4348-a6b2-798f327ef870-kube-api-access-s7lxc\") pod \"helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") " pod="local-path-storage/helper-pod-delete-pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8"
	Dec 09 01:57:47 addons-598284 kubelet[1282]: I1209 01:57:47.274338    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e162edd0dd3d664735252ae1a79334e051e44b6180642c711a3e282a955d9a"
	Dec 09 01:57:47 addons-598284 kubelet[1282]: E1209 01:57:47.283808    1282 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-598284\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-598284' and this object" podUID="f41b0822-c45b-4395-8051-853eb9a84b69" pod="default/test-local-path"
	Dec 09 01:57:47 addons-598284 kubelet[1282]: I1209 01:57:47.886837    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41b0822-c45b-4395-8051-853eb9a84b69" path="/var/lib/kubelet/pods/f41b0822-c45b-4395-8051-853eb9a84b69/volumes"
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417826    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-gcp-creds\") pod \"a797f874-fbbb-4348-a6b2-798f327ef870\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") "
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417884    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-data\") pod \"a797f874-fbbb-4348-a6b2-798f327ef870\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") "
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417906    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7lxc\" (UniqueName: \"kubernetes.io/projected/a797f874-fbbb-4348-a6b2-798f327ef870-kube-api-access-s7lxc\") pod \"a797f874-fbbb-4348-a6b2-798f327ef870\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") "
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417925    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a797f874-fbbb-4348-a6b2-798f327ef870-script\") pod \"a797f874-fbbb-4348-a6b2-798f327ef870\" (UID: \"a797f874-fbbb-4348-a6b2-798f327ef870\") "
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417973    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-data" (OuterVolumeSpecName: "data") pod "a797f874-fbbb-4348-a6b2-798f327ef870" (UID: "a797f874-fbbb-4348-a6b2-798f327ef870"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.417969    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a797f874-fbbb-4348-a6b2-798f327ef870" (UID: "a797f874-fbbb-4348-a6b2-798f327ef870"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.418075    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-data\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.418093    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a797f874-fbbb-4348-a6b2-798f327ef870-gcp-creds\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.418302    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a797f874-fbbb-4348-a6b2-798f327ef870-script" (OuterVolumeSpecName: "script") pod "a797f874-fbbb-4348-a6b2-798f327ef870" (UID: "a797f874-fbbb-4348-a6b2-798f327ef870"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.420454    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a797f874-fbbb-4348-a6b2-798f327ef870-kube-api-access-s7lxc" (OuterVolumeSpecName: "kube-api-access-s7lxc") pod "a797f874-fbbb-4348-a6b2-798f327ef870" (UID: "a797f874-fbbb-4348-a6b2-798f327ef870"). InnerVolumeSpecName "kube-api-access-s7lxc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.519298    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s7lxc\" (UniqueName: \"kubernetes.io/projected/a797f874-fbbb-4348-a6b2-798f327ef870-kube-api-access-s7lxc\") on node \"addons-598284\" DevicePath \"\""
	Dec 09 01:57:48 addons-598284 kubelet[1282]: I1209 01:57:48.519339    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a797f874-fbbb-4348-a6b2-798f327ef870-script\") on node \"addons-598284\" DevicePath \"\""
	
	
	==> storage-provisioner [c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3] <==
	W1209 01:57:22.956510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:24.959371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:24.963388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:26.965685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:26.970536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:28.973537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:28.977208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:30.979473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:30.982715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:32.985161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:32.988389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:34.991080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:34.995555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:36.997903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:37.001255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:39.004387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:39.009335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:41.012561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:41.015827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:43.018778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:43.023222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:45.025808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:45.029311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:47.032711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 01:57:47.036222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-598284 -n addons-598284
helpers_test.go:269: (dbg) Run:  kubectl --context addons-598284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-598284 describe pod ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-598284 describe pod ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv: exit status 1 (63.537588ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sqg9m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xg4qv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-598284 describe pod ingress-nginx-admission-create-sqg9m ingress-nginx-admission-patch-xg4qv: exit status 1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable headlamp --alsologtostderr -v=1: exit status 11 (241.3906ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:57:49.358532   27678 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:49.358834   27678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:49.358843   27678 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:49.358847   27678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:49.359005   27678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:49.359234   27678 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:49.359532   27678 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:49.359552   27678 addons.go:622] checking whether the cluster is paused
	I1209 01:57:49.359646   27678 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:49.359657   27678 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:49.360002   27678 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:49.378063   27678 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:49.378122   27678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:49.396254   27678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:49.488317   27678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:49.488404   27678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:49.516718   27678 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:57:49.516742   27678 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:49.516748   27678 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:49.516753   27678 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:49.516758   27678 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:49.516763   27678 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:49.516769   27678 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:49.516776   27678 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:49.516788   27678 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:49.516799   27678 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:49.516807   27678 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:49.516812   27678 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:49.516819   27678 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:49.516824   27678 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:49.516832   27678 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:49.516844   27678 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:49.516852   27678 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:49.516857   27678 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:49.516862   27678 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:49.516867   27678 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:49.516886   27678 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:49.516894   27678 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:49.516899   27678 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:49.516903   27678 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:49.516907   27678 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:49.516914   27678 cri.go:89] found id: ""
	I1209 01:57:49.516961   27678 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:49.530666   27678 out.go:203] 
	W1209 01:57:49.531852   27678 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:49.531888   27678 out.go:285] * 
	* 
	W1209 01:57:49.537346   27678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:49.538518   27678 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1115: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.43s)
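
Note on the failure mode: every `addons disable` failure captured in this report follows the same pattern. The command first checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio image `/run/runc` does not exist, so the runc call exits with status 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal Go sketch of just that failing step, not minikube's actual implementation; it assumes a Linux host with sudo and runc on PATH.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the "sudo runc list -f json" step in the logs above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On the node under test this surfaces as:
			//   level=error msg="open /run/runc: no such file or directory"
			// and the non-zero exit is what the paused-check reports upward
			// as MK_ADDON_DISABLE_PAUSED.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("paused-check output: %s\n", out)
	}

A missing `/run/runc` state directory is consistent with this crio configuration keeping its runtime state elsewhere (crio manages state under /run/crio and may be configured with a runtime other than bare runc), so the bare runc query finds no container root to read.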

                                                
                                    
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-tqgf8" [526f175e-5429-478d-aa54-555b8d716d86] Running
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003340431s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (233.953648ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 01:57:50.173788   27762 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:50.174095   27762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:50.174106   27762 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:50.174110   27762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:50.174271   27762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:50.174529   27762 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:50.174833   27762 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:50.174853   27762 addons.go:622] checking whether the cluster is paused
	I1209 01:57:50.174931   27762 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:50.174942   27762 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:50.175287   27762 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:50.192673   27762 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:50.192720   27762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:50.209711   27762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:50.301240   27762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:50.301309   27762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:50.331917   27762 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:57:50.331939   27762 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:50.331943   27762 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:50.331946   27762 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:50.331949   27762 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:50.331952   27762 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:50.331955   27762 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:50.331961   27762 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:50.331969   27762 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:50.331976   27762 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:50.331981   27762 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:50.331986   27762 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:50.331990   27762 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:50.331995   27762 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:50.332000   27762 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:50.332007   27762 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:50.332015   27762 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:50.332020   27762 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:50.332024   27762 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:50.332029   27762 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:50.332033   27762 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:50.332037   27762 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:50.332041   27762 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:50.332046   27762 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:50.332050   27762 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:50.332053   27762 cri.go:89] found id: ""
	I1209 01:57:50.332088   27762 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:50.346834   27762 out.go:203] 
	W1209 01:57:50.348163   27762 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:50.348178   27762 out.go:285] * 
	W1209 01:57:50.351102   27762 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:50.352419   27762 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
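
Analysis: all of the MK_ADDON_DISABLE_PAUSED failures in this run share one root cause. Before disabling an addon, minikube verifies the cluster is not paused by shelling out to `sudo runc list -f json`, and on this crio node that command exits 1 because runc's default state directory /run/runc does not exist, so every `addons disable` invocation aborts with exit status 11. The Go sketch below is illustrative only (it is not minikube's source; the helper name listPaused is hypothetical) and reproduces the failing check end to end:

// Sketch, assuming only what the log shows: the check runs
// `sudo runc list -f json` and treats any non-zero exit as fatal.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the two fields of interest in `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers. On a host without
// /run/runc (as with crio here), runc fails before emitting any JSON,
// which is exactly the "open /run/runc: no such file or directory"
// error captured in the stderr above.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	if ids, err := listPaused(); err != nil {
		fmt.Println("check paused failed:", err) // the path hit in this report
	} else {
		fmt.Println("paused containers:", ids)
	}
}

The TestAddons/parallel failures below (LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) fail on this same disable-time check, not on the addons themselves.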
TestAddons/parallel/LocalPath (8.09s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run:  kubectl --context addons-598284 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run:  kubectl --context addons-598284 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f41b0822-c45b-4395-8051-853eb9a84b69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f41b0822-c45b-4395-8051-853eb9a84b69] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f41b0822-c45b-4395-8051-853eb9a84b69] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003942505s
addons_test.go:1027: (dbg) Run:  kubectl --context addons-598284 get pvc test-pvc -o=json
addons_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 ssh "cat /opt/local-path-provisioner/pvc-738dc9a4-624f-4e02-8fa9-bd681d624cd8_default_test-pvc/file1"
addons_test.go:1048: (dbg) Run:  kubectl --context addons-598284 delete pod test-local-path
addons_test.go:1052: (dbg) Run:  kubectl --context addons-598284 delete pvc test-pvc
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (239.712675ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 01:57:46.925039   26686 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:46.925203   26686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:46.925214   26686 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:46.925221   26686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:46.925422   26686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:46.925709   26686 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:46.926059   26686 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:46.926079   26686 addons.go:622] checking whether the cluster is paused
	I1209 01:57:46.926172   26686 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:46.926188   26686 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:46.926552   26686 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:46.944127   26686 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:46.944174   26686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:46.961577   26686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:47.051943   26686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:47.052014   26686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:47.080747   26686 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:57:47.080772   26686 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:47.080777   26686 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:47.080786   26686 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:47.080794   26686 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:47.080798   26686 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:47.080800   26686 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:47.080803   26686 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:47.080808   26686 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:47.080816   26686 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:47.080820   26686 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:47.080824   26686 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:47.080834   26686 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:47.080839   26686 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:47.080847   26686 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:47.080860   26686 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:47.080867   26686 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:47.080874   26686 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:47.080878   26686 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:47.080880   26686 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:47.080886   26686 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:47.080888   26686 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:47.080908   26686 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:47.080916   26686 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:47.080922   26686 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:47.080930   26686 cri.go:89] found id: ""
	I1209 01:57:47.080977   26686 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:47.095750   26686 out.go:203] 
	W1209 01:57:47.097431   26686 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:47.097453   26686 out.go:285] * 
	W1209 01:57:47.100692   26686 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:47.102533   26686 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.09s)
TestAddons/parallel/NvidiaDevicePlugin (5.24s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-f8kcp" [e06aa1f3-e53b-4643-93bc-b9cd45f4875e] Running
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003705662s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (233.729779ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 01:57:44.937479   26430 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:44.937647   26430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:44.937656   26430 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:44.937661   26430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:44.937849   26430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:44.938122   26430 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:44.938423   26430 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:44.938441   26430 addons.go:622] checking whether the cluster is paused
	I1209 01:57:44.938518   26430 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:44.938529   26430 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:44.938880   26430 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:44.955251   26430 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:44.955298   26430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:44.972237   26430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:45.062542   26430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:45.062614   26430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:45.090446   26430 cri.go:89] found id: "6755ec162936c6fcb9a5994d600f7db4c52ffc3449f321c834552bb1ab1c1756"
	I1209 01:57:45.090471   26430 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:45.090478   26430 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:45.090484   26430 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:45.090489   26430 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:45.090494   26430 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:45.090499   26430 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:45.090514   26430 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:45.090523   26430 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:45.090535   26430 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:45.090544   26430 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:45.090548   26430 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:45.090553   26430 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:45.090558   26430 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:45.090562   26430 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:45.090578   26430 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:45.090591   26430 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:45.090601   26430 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:45.090609   26430 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:45.090615   26430 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:45.090626   26430 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:45.090630   26430 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:45.090651   26430 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:45.090655   26430 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:45.090659   26430 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:45.090665   26430 cri.go:89] found id: ""
	I1209 01:57:45.090715   26430 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:45.104288   26430 out.go:203] 
	W1209 01:57:45.105511   26430 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:45.105530   26430 out.go:285] * 
	W1209 01:57:45.108484   26430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:45.109625   26430 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)
TestAddons/parallel/Yakd (5.23s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kgfgf" [20549855-347c-48cd-9c61-571d5aacb017] Running
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003255416s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable yakd --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable yakd --alsologtostderr -v=1: exit status 11 (229.633405ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 01:57:31.215020   24548 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:31.215179   24548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:31.215196   24548 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:31.215203   24548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:31.215371   24548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:31.215597   24548 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:31.215898   24548 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:31.215918   24548 addons.go:622] checking whether the cluster is paused
	I1209 01:57:31.215993   24548 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:31.216004   24548 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:31.216384   24548 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:31.233207   24548 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:31.233276   24548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:31.249679   24548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:31.339647   24548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:31.339706   24548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:31.366994   24548 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:31.367016   24548 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:31.367022   24548 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:31.367027   24548 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:31.367031   24548 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:31.367036   24548 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:31.367041   24548 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:31.367046   24548 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:31.367050   24548 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:31.367058   24548 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:31.367064   24548 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:31.367072   24548 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:31.367086   24548 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:31.367095   24548 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:31.367099   24548 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:31.367112   24548 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:31.367120   24548 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:31.367126   24548 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:31.367130   24548 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:31.367134   24548 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:31.367140   24548 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:31.367144   24548 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:31.367148   24548 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:31.367152   24548 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:31.367156   24548 cri.go:89] found id: ""
	I1209 01:57:31.367203   24548 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:31.380673   24548 out.go:203] 
	W1209 01:57:31.382101   24548 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:31.382117   24548 out.go:285] * 
	W1209 01:57:31.385001   24548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:31.386258   24548 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.23s)
TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1098: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-ftp97" [d071cb4a-2605-4817-9fd3-acecc4c70e72] Running
I1209 01:57:26.161839   14552 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 01:57:26.161862   14552 kapi.go:107] duration metric: took 3.732734ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1098: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.00293609s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-598284 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-598284 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (231.322986ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 01:57:32.215039   24624 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:57:32.215315   24624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:32.215323   24624 out.go:374] Setting ErrFile to fd 2...
	I1209 01:57:32.215327   24624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:57:32.215498   24624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:57:32.215748   24624 mustload.go:66] Loading cluster: addons-598284
	I1209 01:57:32.216055   24624 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:32.216073   24624 addons.go:622] checking whether the cluster is paused
	I1209 01:57:32.216151   24624 config.go:182] Loaded profile config "addons-598284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:57:32.216163   24624 host.go:66] Checking if "addons-598284" exists ...
	I1209 01:57:32.216534   24624 cli_runner.go:164] Run: docker container inspect addons-598284 --format={{.State.Status}}
	I1209 01:57:32.233965   24624 ssh_runner.go:195] Run: systemctl --version
	I1209 01:57:32.234010   24624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598284
	I1209 01:57:32.250520   24624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/addons-598284/id_rsa Username:docker}
	I1209 01:57:32.340875   24624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:57:32.340984   24624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:57:32.368716   24624 cri.go:89] found id: "0cf8359e032c52902340f315938de70c4fd155779ed7286a85ca8f03ac9dad3d"
	I1209 01:57:32.368735   24624 cri.go:89] found id: "09ee3d53e073920d34456d140c48179d59773d8ed1837060f44c273f8bf74440"
	I1209 01:57:32.368739   24624 cri.go:89] found id: "07327f304dd6a18c03aa3f597cd51a01b6e32840261d98f7dee6ec6d06afa092"
	I1209 01:57:32.368742   24624 cri.go:89] found id: "cd4e7d6b980f024dfd8d284d33ec6ee7d4dd6f637cf14bc3829879759ae4ecfa"
	I1209 01:57:32.368745   24624 cri.go:89] found id: "d32117de7c58e4c6388176fecb6d7824ba37d9cfec4edf39c8c967a6737289b1"
	I1209 01:57:32.368748   24624 cri.go:89] found id: "a22b2817d5b76e8cb46bf16077c02169f05a643405abc6dc59faa8e5c13dae18"
	I1209 01:57:32.368751   24624 cri.go:89] found id: "9259d8cba23be61a74933355ac84fd297f6b7ac4b5651ab5904a0a0a34e675c2"
	I1209 01:57:32.368754   24624 cri.go:89] found id: "58565aa6aebcd8e77ee185ed9788a3f0471a5d5b8067f4b07a2b2ace260ca874"
	I1209 01:57:32.368768   24624 cri.go:89] found id: "258a6b06d27dc86c72fee6932782495c7ff6666b08a2eae882764792e8a947d0"
	I1209 01:57:32.368776   24624 cri.go:89] found id: "4f60883937b8bf47f59aed3a45d25fa8b9c4cf3963072c82eaaa1d79ff92d16a"
	I1209 01:57:32.368780   24624 cri.go:89] found id: "99770ac31d14742abdf9ef316a0597c922578d17b8ccaba07802b5f6f0fecc05"
	I1209 01:57:32.368783   24624 cri.go:89] found id: "a8861dac6b0356fd655cd256c380f40994da20341ad01653dc953c851f153e0d"
	I1209 01:57:32.368786   24624 cri.go:89] found id: "c222dc3a3f27964aad73c261172ed6875e5b75e0aad1cfcad5ee1518e82fd613"
	I1209 01:57:32.368788   24624 cri.go:89] found id: "7a1b1e01077e4fc69ff3e12685fc259c5dd0fdf244abb6ebad247e1e94042595"
	I1209 01:57:32.368791   24624 cri.go:89] found id: "69b827fe1bc6eaa88a09d898c9c23e43adeefd1225ab08807242b76f10e503fa"
	I1209 01:57:32.368795   24624 cri.go:89] found id: "af1bbcbd5b2e7663f64ab54bd0ac1c17bd6f59f82c29b693cb73851bc183f9ae"
	I1209 01:57:32.368800   24624 cri.go:89] found id: "2c82ba2d18c010356279d00eb8bdcef8e7f17e55cfdbd78beff4541bc2fe74c7"
	I1209 01:57:32.368804   24624 cri.go:89] found id: "c21d5137f49f7cb41d2fc4ae53d9d51ca4f32d9d90ac05447cf0394b344c50b3"
	I1209 01:57:32.368806   24624 cri.go:89] found id: "ea6bd4352d85a19f68cd8389bcc4568dc78561e66e72ddff65a894b21510e5fd"
	I1209 01:57:32.368809   24624 cri.go:89] found id: "c951a1040b3355956343c48363cf921ae48ef4ebf1e87b69c7b8e31e66520df6"
	I1209 01:57:32.368811   24624 cri.go:89] found id: "40e0aceab5999514ebe6b2339256d289e32fa53d0e7a4253bec0cb6d3930d2e7"
	I1209 01:57:32.368814   24624 cri.go:89] found id: "49c6272ba70f52774e1d716ef3c677003f296f9638f66abb935185d356fdc179"
	I1209 01:57:32.368816   24624 cri.go:89] found id: "16e2e43c2d88bf8a1e2a2db1be719b50c154cc3cd17a467e25a0f3b660b417b5"
	I1209 01:57:32.368819   24624 cri.go:89] found id: "b5bddb335ebc68dae8b64728d338dc558cd6e355f00480c20af9145063f5d44d"
	I1209 01:57:32.368821   24624 cri.go:89] found id: ""
	I1209 01:57:32.368855   24624 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 01:57:32.382339   24624 out.go:203] 
	W1209 01:57:32.383414   24624 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T01:57:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 01:57:32.383434   24624 out.go:285] * 
	W1209 01:57:32.386315   24624 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 01:57:32.387375   24624 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-598284 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-497139 image ls --format json --alsologtostderr: (2.312808155s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497139 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497139 image ls --format json --alsologtostderr:
I1209 02:06:19.112757   73784 out.go:360] Setting OutFile to fd 1 ...
I1209 02:06:19.113086   73784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:19.113099   73784 out.go:374] Setting ErrFile to fd 2...
I1209 02:06:19.113106   73784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:19.113338   73784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:06:19.114245   73784 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:19.114390   73784 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:19.115023   73784 cli_runner.go:164] Run: docker container inspect functional-497139 --format={{.State.Status}}
I1209 02:06:19.140856   73784 ssh_runner.go:195] Run: systemctl --version
I1209 02:06:19.140919   73784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-497139
I1209 02:06:19.165109   73784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-497139/id_rsa Username:docker}
I1209 02:06:19.273855   73784 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 02:06:21.311191   73784 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.037293178s)
W1209 02:06:21.311275   73784 cache_images.go:736] Failed to list images for profile functional-497139 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1209 02:06:21.306752    7405 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-09T02:06:21Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.31s)
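
Analysis: this failure differs from the runc ones above: `sudo crictl images --output json` ran for about two seconds and then the CRI ListImages call timed out with DeadlineExceeded, so the test saw an empty image list. A minimal Go sketch of that interaction (illustrative only, assuming a two-second deadline; this is not the test's actual code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// listImages runs the same command the test ran, but under an explicit
// deadline so a hang surfaces as context.DeadlineExceeded instead of
// blocking; crictl itself reported the equivalent CRI-side timeout above.
func listImages(timeout time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	out, err := exec.CommandContext(ctx, "sudo", "crictl", "images", "--output", "json").Output()
	if ctx.Err() != nil {
		return nil, fmt.Errorf("listing images: %w", ctx.Err())
	}
	return out, err
}

func main() {
	if _, err := listImages(2 * time.Second); err != nil {
		fmt.Println(err) // e.g. "listing images: context deadline exceeded"
	}
}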
TestJSONOutput/pause/Command (2.2s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-792692 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-792692 --output=json --user=testUser: exit status 80 (2.198640124s)
-- stdout --
	{"specversion":"1.0","id":"3780da8b-c323-4fd1-8825-a215d7aaaf6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-792692 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cab2712c-1c24-41cd-b051-4938f45f538a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-09T02:16:10Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"f42361cf-94d3-453c-956d-a00be6e18433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-792692 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.20s)
TestJSONOutput/unpause/Command (1.43s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-792692 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-792692 --output=json --user=testUser: exit status 80 (1.432218399s)
-- stdout --
	{"specversion":"1.0","id":"c8692512-176c-452b-813b-41ea0953b820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-792692 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d8b2b286-c79d-45c5-8c64-650540bc0eeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-09T02:16:11Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e9c75d9f-b582-4020-a551-69122eedc81e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-792692 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.43s)
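
Analysis: both JSONOutput failures are the same /run/runc problem surfacing through `--output=json`. Each stdout line above is a CloudEvents-style record; the `io.k8s.sigs.minikube.error` events carry the GUEST_PAUSE/GUEST_UNPAUSE name and the runc error text in their data. A small illustrative Go consumer (not part of the test suite) that pulls those errors out of the stream:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields used here; minikube's schema has more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // message fields can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines in the stream
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			// e.g. "GUEST_UNPAUSE: Pause: list paused: runc: ..."
			fmt.Printf("%s: %s\n", e.Data["name"], e.Data["message"])
		}
	}
}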
TestPause/serial/Pause (7.22s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-752151 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-752151 --alsologtostderr -v=5: exit status 80 (2.396076183s)
-- stdout --
	* Pausing node pause-752151 ... 
	
	
-- /stdout --
** stderr ** 
	I1209 02:30:07.842761  215396 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:30:07.843039  215396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.843052  215396 out.go:374] Setting ErrFile to fd 2...
	I1209 02:30:07.843061  215396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.843331  215396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:30:07.843604  215396 out.go:368] Setting JSON to false
	I1209 02:30:07.843622  215396 mustload.go:66] Loading cluster: pause-752151
	I1209 02:30:07.844048  215396 config.go:182] Loaded profile config "pause-752151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:30:07.844462  215396 cli_runner.go:164] Run: docker container inspect pause-752151 --format={{.State.Status}}
	I1209 02:30:07.861988  215396 host.go:66] Checking if "pause-752151" exists ...
	I1209 02:30:07.862487  215396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:30:07.925932  215396 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-09 02:30:07.91624923 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:30:07.926551  215396 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-752151 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:30:07.928115  215396 out.go:179] * Pausing node pause-752151 ... 
	I1209 02:30:07.929193  215396 host.go:66] Checking if "pause-752151" exists ...
	I1209 02:30:07.929510  215396 ssh_runner.go:195] Run: systemctl --version
	I1209 02:30:07.929561  215396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-752151
	I1209 02:30:07.947711  215396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/pause-752151/id_rsa Username:docker}
	I1209 02:30:08.041204  215396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:08.054225  215396 pause.go:52] kubelet running: true
	I1209 02:30:08.054287  215396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:30:08.181227  215396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:30:08.181320  215396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:30:08.260537  215396 cri.go:89] found id: "b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44"
	I1209 02:30:08.260565  215396 cri.go:89] found id: "7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248"
	I1209 02:30:08.260571  215396 cri.go:89] found id: "34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85"
	I1209 02:30:08.260576  215396 cri.go:89] found id: "4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed"
	I1209 02:30:08.260581  215396 cri.go:89] found id: "ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136"
	I1209 02:30:08.260585  215396 cri.go:89] found id: "127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20"
	I1209 02:30:08.260590  215396 cri.go:89] found id: "bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f"
	I1209 02:30:08.260594  215396 cri.go:89] found id: ""
	I1209 02:30:08.260642  215396 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:30:08.280374  215396 retry.go:31] will retry after 157.720904ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:30:08Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:30:08.438847  215396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:08.455425  215396 pause.go:52] kubelet running: false
	I1209 02:30:08.455485  215396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:30:08.571282  215396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:30:08.571365  215396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:30:08.641218  215396 cri.go:89] found id: "b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44"
	I1209 02:30:08.641247  215396 cri.go:89] found id: "7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248"
	I1209 02:30:08.641253  215396 cri.go:89] found id: "34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85"
	I1209 02:30:08.641259  215396 cri.go:89] found id: "4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed"
	I1209 02:30:08.641263  215396 cri.go:89] found id: "ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136"
	I1209 02:30:08.641268  215396 cri.go:89] found id: "127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20"
	I1209 02:30:08.641274  215396 cri.go:89] found id: "bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f"
	I1209 02:30:08.641279  215396 cri.go:89] found id: ""
	I1209 02:30:08.641324  215396 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:30:08.652916  215396 retry.go:31] will retry after 452.437528ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:30:08Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:30:09.105534  215396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:09.118164  215396 pause.go:52] kubelet running: false
	I1209 02:30:09.118223  215396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:30:09.228075  215396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:30:09.228174  215396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:30:09.297403  215396 cri.go:89] found id: "b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44"
	I1209 02:30:09.297427  215396 cri.go:89] found id: "7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248"
	I1209 02:30:09.297434  215396 cri.go:89] found id: "34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85"
	I1209 02:30:09.297439  215396 cri.go:89] found id: "4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed"
	I1209 02:30:09.297444  215396 cri.go:89] found id: "ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136"
	I1209 02:30:09.297449  215396 cri.go:89] found id: "127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20"
	I1209 02:30:09.297453  215396 cri.go:89] found id: "bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f"
	I1209 02:30:09.297458  215396 cri.go:89] found id: ""
	I1209 02:30:09.297501  215396 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:30:09.308671  215396 retry.go:31] will retry after 544.356648ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:30:09Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:30:09.853385  215396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:30:09.867083  215396 pause.go:52] kubelet running: false
	I1209 02:30:09.867151  215396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:30:09.977813  215396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:30:09.977907  215396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:30:10.044462  215396 cri.go:89] found id: "b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44"
	I1209 02:30:10.044488  215396 cri.go:89] found id: "7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248"
	I1209 02:30:10.044494  215396 cri.go:89] found id: "34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85"
	I1209 02:30:10.044500  215396 cri.go:89] found id: "4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed"
	I1209 02:30:10.044505  215396 cri.go:89] found id: "ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136"
	I1209 02:30:10.044509  215396 cri.go:89] found id: "127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20"
	I1209 02:30:10.044514  215396 cri.go:89] found id: "bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f"
	I1209 02:30:10.044518  215396 cri.go:89] found id: ""
	I1209 02:30:10.044566  215396 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:30:10.124234  215396 out.go:203] 
	W1209 02:30:10.126218  215396 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:30:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:30:10.126241  215396 out.go:285] * 
	W1209 02:30:10.130491  215396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:30:10.158079  215396 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-752151 --alsologtostderr -v=5" : exit status 80
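Note on the failure mode above: every retry dies on `sudo runc list -f json` with `open /run/runc: no such file or directory`. The CRI-O configuration dumped later in this report shows `default_runtime = "crun"` with `runtime_root = "/run/crun"` (runc keeps `runtime_root = "/run/runc"`), so the running containers are tracked under `/run/crun` and the runc state directory never exists. The following is a minimal standalone sketch, not minikube's actual pause code, that probes both configured runtime state directories before listing; the directory paths are taken from the CRI-O config in this report.

```go
// Hypothetical diagnostic sketch: check which OCI runtime state directory
// actually exists before shelling out to `<runtime> list -f json`, instead
// of assuming runc as the pause path above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// State roots for the two runtimes CRI-O is configured with
	// (see runtime_root in the CRI-O config dump below).
	runtimes := map[string]string{
		"runc": "/run/runc",
		"crun": "/run/crun",
	}
	for name, root := range runtimes {
		if _, err := os.Stat(root); err != nil {
			// Reproduces the log's "open /run/runc: no such file or directory".
			fmt.Printf("%s: state dir %s missing: %v\n", name, root, err)
			continue
		}
		// List containers tracked by this runtime, as the pause path does for runc.
		out, err := exec.Command("sudo", name, "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("%s list failed: %v\n%s", name, err, out)
			continue
		}
		fmt.Printf("%s containers: %s\n", name, out)
	}
}
```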
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-752151
helpers_test.go:243: (dbg) docker inspect pause-752151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291",
	        "Created": "2025-12-09T02:29:21.058262097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201727,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:29:24.63671276Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/hostname",
	        "HostsPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/hosts",
	        "LogPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291-json.log",
	        "Name": "/pause-752151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-752151:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-752151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291",
	                "LowerDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-752151",
	                "Source": "/var/lib/docker/volumes/pause-752151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-752151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-752151",
	                "name.minikube.sigs.k8s.io": "pause-752151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "454e03e3cf0e25e4e36d5a0e61adbb67d2c858772274ddc381358078bdb84637",
	            "SandboxKey": "/var/run/docker/netns/454e03e3cf0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-752151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b74d10fb4a038520f3e46fa61d173cdd8b3cb882741d030a25d7eaa411802cd",
	                    "EndpointID": "f19f89c36aecb2927b69efad6586f47186fdc60c41f462af4d4e0d55d6e7b32e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ca:ea:bb:af:bc:78",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-752151",
	                        "347e08b03701"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-752151 -n pause-752151
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-752151 -n pause-752151: exit status 2 (320.107413ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-752151 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-752151 logs -n 25: (2.097719235s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-155628 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │ 09 Dec 25 02:27 UTC │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --cancel-scheduled                                                                                              │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │ 09 Dec 25 02:27 UTC │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │ 09 Dec 25 02:28 UTC │
	│ delete  │ -p scheduled-stop-155628                                                                                                                 │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │ 09 Dec 25 02:28 UTC │
	│ start   │ -p insufficient-storage-342795 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-342795 │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ delete  │ -p insufficient-storage-342795                                                                                                           │ insufficient-storage-342795 │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p pause-752151 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p offline-crio-654778 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-654778         │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p stopped-upgrade-768415 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-768415      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p missing-upgrade-857664 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-857664      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ stop    │ stopped-upgrade-768415 stop                                                                                                              │ stopped-upgrade-768415      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p missing-upgrade-857664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-857664      │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │                     │
	│ start   │ -p stopped-upgrade-768415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-768415      │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │                     │
	│ start   │ -p pause-752151 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │ 09 Dec 25 02:30 UTC │
	│ delete  │ -p offline-crio-654778                                                                                                                   │ offline-crio-654778         │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-190944   │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │                     │
	│ pause   │ -p pause-752151 --alsologtostderr -v=5                                                                                                   │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:30:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:30:07.615293  215247 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:30:07.615390  215247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.615399  215247 out.go:374] Setting ErrFile to fd 2...
	I1209 02:30:07.615404  215247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.615597  215247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:30:07.616039  215247 out.go:368] Setting JSON to false
	I1209 02:30:07.617021  215247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4357,"bootTime":1765243051,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:30:07.617069  215247 start.go:143] virtualization: kvm guest
	I1209 02:30:07.618655  215247 out.go:179] * [kubernetes-upgrade-190944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:30:07.619814  215247 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:30:07.619841  215247 notify.go:221] Checking for updates...
	I1209 02:30:07.621821  215247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:30:07.622841  215247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:30:07.623739  215247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:30:07.624732  215247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:30:07.625663  215247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:30:07.626995  215247 config.go:182] Loaded profile config "missing-upgrade-857664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:30:07.627132  215247 config.go:182] Loaded profile config "pause-752151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:30:07.627219  215247 config.go:182] Loaded profile config "stopped-upgrade-768415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:30:07.627308  215247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:30:07.650347  215247 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:30:07.650487  215247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:30:07.711014  215247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-09 02:30:07.700512611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:30:07.711163  215247 docker.go:319] overlay module found
	I1209 02:30:07.712697  215247 out.go:179] * Using the docker driver based on user configuration
	I1209 02:30:07.300734  213860 pod_ready.go:83] waiting for pod "kube-scheduler-pause-752151" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:30:07.701232  213860 pod_ready.go:94] pod "kube-scheduler-pause-752151" is "Ready"
	I1209 02:30:07.701263  213860 pod_ready.go:86] duration metric: took 400.500798ms for pod "kube-scheduler-pause-752151" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:30:07.701279  213860 pod_ready.go:40] duration metric: took 1.604426274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:30:07.749209  213860 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:30:07.753763  213860 out.go:179] * Done! kubectl is now configured to use "pause-752151" cluster and "default" namespace by default
	I1209 02:30:07.713713  215247 start.go:309] selected driver: docker
	I1209 02:30:07.713732  215247 start.go:927] validating driver "docker" against <nil>
	I1209 02:30:07.713746  215247 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:30:07.714528  215247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:30:07.775102  215247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-09 02:30:07.761303101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:30:07.775333  215247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:30:07.775614  215247 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 02:30:07.776930  215247 out.go:179] * Using Docker driver with root privileges
	I1209 02:30:07.777983  215247 cni.go:84] Creating CNI manager for ""
	I1209 02:30:07.778043  215247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:30:07.778053  215247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:30:07.778109  215247 start.go:353] cluster config:
	{Name:kubernetes-upgrade-190944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-190944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:30:07.779171  215247 out.go:179] * Starting "kubernetes-upgrade-190944" primary control-plane node in "kubernetes-upgrade-190944" cluster
	I1209 02:30:07.780296  215247 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:30:07.781256  215247 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:30:07.782180  215247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 02:30:07.782224  215247 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1209 02:30:07.782234  215247 cache.go:65] Caching tarball of preloaded images
	I1209 02:30:07.782294  215247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:30:07.782317  215247 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:30:07.782332  215247 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1209 02:30:07.782451  215247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/config.json ...
	I1209 02:30:07.782479  215247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/config.json: {Name:mk5832f6ecaaaf2b42ba47b3e1268a9a1ef18d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:30:07.804154  215247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:30:07.804171  215247 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:30:07.804188  215247 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:30:07.804220  215247 start.go:360] acquireMachinesLock for kubernetes-upgrade-190944: {Name:mkdb0b72b48cd2eea012966a6a72e94ae423c0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:30:07.804334  215247 start.go:364] duration metric: took 90.334µs to acquireMachinesLock for "kubernetes-upgrade-190944"
	I1209 02:30:07.804364  215247 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-190944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-190944 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:30:07.804456  215247 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:30:08.279735  210910 cli_runner.go:164] Run: docker container inspect missing-upgrade-857664 --format={{.State.Status}}
	W1209 02:30:08.298214  210910 cli_runner.go:211] docker container inspect missing-upgrade-857664 --format={{.State.Status}} returned with exit code 1
	I1209 02:30:08.298280  210910 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-857664": docker container inspect missing-upgrade-857664 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-857664
	I1209 02:30:08.298292  210910 oci.go:673] temporary error: container missing-upgrade-857664 status is  but expect it to be exited
	I1209 02:30:08.298337  210910 oci.go:88] couldn't shut down missing-upgrade-857664 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-857664": docker container inspect missing-upgrade-857664 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-857664
	 
	I1209 02:30:08.298406  210910 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-857664
	I1209 02:30:08.316172  210910 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-857664
	W1209 02:30:08.333663  210910 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-857664 returned with exit code 1
	I1209 02:30:08.333759  210910 cli_runner.go:164] Run: docker network inspect missing-upgrade-857664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:30:08.351063  210910 cli_runner.go:164] Run: docker network rm missing-upgrade-857664
	I1209 02:30:08.460880  210910 fix.go:124] Sleeping 1 second for extra luck!
	I1209 02:30:09.460995  210910 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:30:09.467512  210910 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:30:09.467699  210910 start.go:159] libmachine.API.Create for "missing-upgrade-857664" (driver="docker")
	I1209 02:30:09.467743  210910 client.go:173] LocalClient.Create starting
	I1209 02:30:09.467869  210910 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:30:09.467922  210910 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:09.467961  210910 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:09.468050  210910 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:30:09.468082  210910 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:09.468107  210910 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:09.468430  210910 cli_runner.go:164] Run: docker network inspect missing-upgrade-857664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:30:09.962761  211634 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 02:30:09.962809  211634 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77091022Z" level=info msg="RDT not available in the host system"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.770918753Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77160658Z" level=info msg="Conmon does support the --sync option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.771619013Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.771628905Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.772287499Z" level=info msg="Conmon does support the --sync option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.772301419Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.775858185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.775876266Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.776320577Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77668316Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.776733313Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846404238Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9w5cw Namespace:kube-system ID:887eccc27a88b8316b98fa690b12287c676a479657065f130aadcb0a9a82b9e8 UID:fabd0092-8d8e-481f-b35c-4e9deed5ec10 NetNS:/var/run/netns/1ce99800-e837-4f66-bbf9-8ee1ee69cf8f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128250}] Aliases:map[]}"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846564395Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9w5cw for CNI network kindnet (type=ptp)"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846974492Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847000754Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847057495Z" level=info msg="Create NRI interface"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847153304Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847165709Z" level=info msg="runtime interface created"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847178641Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847186323Z" level=info msg="runtime interface starting up..."
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847193908Z" level=info msg="starting plugins..."
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847208708Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847474091Z" level=info msg="No systemd watchdog enabled"
	Dec 09 02:30:04 pause-752151 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b2d9434e92301       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   887eccc27a88b       coredns-66bc5c9577-9w5cw               kube-system
	7ea8f1e3efec5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   694f2ca544335       kindnet-nplkf                          kube-system
	34031729cf287       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   23 seconds ago      Running             kube-proxy                0                   6c107dfe95be4       kube-proxy-8t4qw                       kube-system
	4f7e6a985f0e8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   33 seconds ago      Running             etcd                      0                   8807c8b22b2d0       etcd-pause-752151                      kube-system
	ee0ffb0c1cf67       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   33 seconds ago      Running             kube-controller-manager   0                   030bb9df9f7d1       kube-controller-manager-pause-752151   kube-system
	127fe1fa839b2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   33 seconds ago      Running             kube-scheduler            0                   f1240030c934b       kube-scheduler-pause-752151            kube-system
	bb9aa5d3e80c8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   33 seconds ago      Running             kube-apiserver            0                   47a885a849dd4       kube-apiserver-pause-752151            kube-system
	
	
	==> coredns [b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41501 - 44238 "HINFO IN 6963736022572244806.6696722005361087735. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124808636s
	
	
	==> describe nodes <==
	Name:               pause-752151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-752151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-752151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_29_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:29:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-752151
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:30:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-752151
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                d5cac5cf-0be9-4969-b652-49008d1d35ad
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9w5cw                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-752151                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-nplkf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-752151             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-752151    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-8t4qw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-752151             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node pause-752151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node pause-752151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node pause-752151 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node pause-752151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node pause-752151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node pause-752151 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node pause-752151 event: Registered Node pause-752151 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-752151 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed] <==
	{"level":"warn","ts":"2025-12-09T02:29:39.737097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.744282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.759227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.766477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.773240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.779488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.787118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.794255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.802389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.819731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.825871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.833041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.853273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.859443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.866129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.872580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.879649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.886819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.894804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.901364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.916438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.924458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.934379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.992274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41766","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:30:12 up  1:12,  0 user,  load average: 3.43, 1.80, 1.36
	Linux pause-752151 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248] <==
	I1209 02:29:49.198505       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:29:49.293738       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1209 02:29:49.293877       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:29:49.293896       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:29:49.293925       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:29:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:29:49.496508       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:29:49.496765       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:29:49.496917       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:29:49.497170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:29:49.893673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:29:49.893705       1 metrics.go:72] Registering metrics
	I1209 02:29:49.893814       1 controller.go:711] "Syncing nftables rules"
	I1209 02:29:59.498710       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:29:59.498765       1 main.go:301] handling current node
	I1209 02:30:09.496456       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:30:09.496488       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f] <==
	I1209 02:29:40.656790       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:29:40.657309       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:29:40.658961       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:40.659286       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:29:40.665403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:40.665587       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:29:40.665609       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:29:40.673958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:29:41.563550       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 02:29:41.570573       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:29:41.570704       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:29:42.351807       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:29:42.399314       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:29:42.468893       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:29:42.478070       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1209 02:29:42.479362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:29:42.484580       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:29:42.593574       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:29:43.265582       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:29:43.274135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:29:43.282248       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:29:48.344663       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:48.348884       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:48.443069       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:29:48.641540       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136] <==
	I1209 02:29:47.601317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:29:47.606490       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:29:47.619629       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:29:47.628955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:29:47.629979       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:29:47.636797       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:29:47.640883       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:29:47.642016       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 02:29:47.642064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1209 02:29:47.642357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 02:29:47.642450       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 02:29:47.642791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:29:47.642850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:29:47.643989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 02:29:47.644135       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:29:47.644236       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-752151"
	I1209 02:29:47.644303       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 02:29:47.644678       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:29:47.645210       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:29:47.646681       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 02:29:47.647265       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1209 02:29:47.647516       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:29:47.652726       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:29:47.658704       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 02:30:02.646735       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85] <==
	I1209 02:29:49.055079       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:29:49.136219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:29:49.236397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:29:49.236432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1209 02:29:49.236552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:29:49.255223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:29:49.255281       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:29:49.261863       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:29:49.262273       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:29:49.262307       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:29:49.263713       1 config.go:200] "Starting service config controller"
	I1209 02:29:49.263740       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:29:49.263796       1 config.go:309] "Starting node config controller"
	I1209 02:29:49.263819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:29:49.263895       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:29:49.263907       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:29:49.263927       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:29:49.263932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:29:49.364795       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:29:49.364863       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:29:49.364882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:29:49.364924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20] <==
	E1209 02:29:40.629337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:29:40.629373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:29:40.629433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:29:40.629448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:29:40.629446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:29:40.629528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:29:40.629580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:29:41.438387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 02:29:41.442741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:29:41.448484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:29:41.468945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:29:41.507970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:29:41.521414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 02:29:41.523298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 02:29:41.603796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 02:29:41.711887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:29:41.713469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1209 02:29:41.731087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 02:29:41.825155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 02:29:41.849044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:29:41.887560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 02:29:41.994236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:29:42.060855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 02:29:42.064008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1209 02:29:43.625496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:29:44 pause-752151 kubelet[1296]: E1209 02:29:44.253487    1296 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-752151\" already exists" pod="kube-system/kube-controller-manager-pause-752151"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.262067    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-752151" podStartSLOduration=2.262035285 podStartE2EDuration="2.262035285s" podCreationTimestamp="2025-12-09 02:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.252055803 +0000 UTC m=+1.179016897" watchObservedRunningTime="2025-12-09 02:29:44.262035285 +0000 UTC m=+1.188996369"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.262238    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-752151" podStartSLOduration=1.262226241 podStartE2EDuration="1.262226241s" podCreationTimestamp="2025-12-09 02:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.261279886 +0000 UTC m=+1.188240977" watchObservedRunningTime="2025-12-09 02:29:44.262226241 +0000 UTC m=+1.189187337"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.271194    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-752151" podStartSLOduration=3.271177562 podStartE2EDuration="3.271177562s" podCreationTimestamp="2025-12-09 02:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.271064244 +0000 UTC m=+1.198025330" watchObservedRunningTime="2025-12-09 02:29:44.271177562 +0000 UTC m=+1.198138652"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.292276    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-752151" podStartSLOduration=1.29225519 podStartE2EDuration="1.29225519s" podCreationTimestamp="2025-12-09 02:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.282112575 +0000 UTC m=+1.209073666" watchObservedRunningTime="2025-12-09 02:29:44.29225519 +0000 UTC m=+1.219216283"
	Dec 09 02:29:47 pause-752151 kubelet[1296]: I1209 02:29:47.647102    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 02:29:47 pause-752151 kubelet[1296]: I1209 02:29:47.653424    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724431    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cad3a92-cc65-4362-a23a-927d510294c9-kube-proxy\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724471    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cad3a92-cc65-4362-a23a-927d510294c9-xtables-lock\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724498    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79t84\" (UniqueName: \"kubernetes.io/projected/0cad3a92-cc65-4362-a23a-927d510294c9-kube-api-access-79t84\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724523    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-cni-cfg\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724545    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-lib-modules\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724572    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-xtables-lock\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724585    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkltv\" (UniqueName: \"kubernetes.io/projected/58e1cd7f-e428-4d13-bc94-574edc64fc45-kube-api-access-zkltv\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724599    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cad3a92-cc65-4362-a23a-927d510294c9-lib-modules\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:49 pause-752151 kubelet[1296]: I1209 02:29:49.275684    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8t4qw" podStartSLOduration=1.27566307 podStartE2EDuration="1.27566307s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:49.267889372 +0000 UTC m=+6.194850474" watchObservedRunningTime="2025-12-09 02:29:49.27566307 +0000 UTC m=+6.202624160"
	Dec 09 02:29:49 pause-752151 kubelet[1296]: I1209 02:29:49.284038    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nplkf" podStartSLOduration=1.2840166499999999 podStartE2EDuration="1.28401665s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:49.275871829 +0000 UTC m=+6.202832916" watchObservedRunningTime="2025-12-09 02:29:49.28401665 +0000 UTC m=+6.210977741"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.590214    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.700816    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkz7\" (UniqueName: \"kubernetes.io/projected/fabd0092-8d8e-481f-b35c-4e9deed5ec10-kube-api-access-glkz7\") pod \"coredns-66bc5c9577-9w5cw\" (UID: \"fabd0092-8d8e-481f-b35c-4e9deed5ec10\") " pod="kube-system/coredns-66bc5c9577-9w5cw"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.700862    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fabd0092-8d8e-481f-b35c-4e9deed5ec10-config-volume\") pod \"coredns-66bc5c9577-9w5cw\" (UID: \"fabd0092-8d8e-481f-b35c-4e9deed5ec10\") " pod="kube-system/coredns-66bc5c9577-9w5cw"
	Dec 09 02:30:00 pause-752151 kubelet[1296]: I1209 02:30:00.288795    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9w5cw" podStartSLOduration=12.288774313 podStartE2EDuration="12.288774313s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:30:00.288565321 +0000 UTC m=+17.215526412" watchObservedRunningTime="2025-12-09 02:30:00.288774313 +0000 UTC m=+17.215735403"
	Dec 09 02:30:08 pause-752151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:30:08 pause-752151 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:30:08 pause-752151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:30:08 pause-752151 systemd[1]: kubelet.service: Consumed 1.072s CPU time.
	

-- /stdout --
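The post-mortem log above opens with repeated probes of https://192.168.103.2:8443/healthz that fail with "context deadline exceeded": the apiserver container has been paused, so the request never receives headers before the client timeout fires. A minimal Go sketch of that style of probe follows; it is an approximation, not minikube's actual api_server.go. The checkHealthz name and the 2-second timeout are illustrative, and TLS verification is skipped only because the throwaway cluster's serving certificate is signed by a minikube-generated CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz issues a single GET against an apiserver health endpoint.
	// A paused apiserver accepts nothing, so the call dies on the client
	// timeout with the same "context deadline exceeded (Client.Timeout
	// exceeded while awaiting headers)" text seen in the log above.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 2 * time.Second, // illustrative; the real poller's timeout differs
			Transport: &http.Transport{
				// Assumption: self-signed test-cluster CA, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		// Address taken from the log above; adjust for your cluster.
		fmt.Println(checkHealthz("https://192.168.103.2:8443/healthz"))
	}

Against a paused control plane this prints a Client.Timeout error like the one logged above; against a healthy one it prints <nil>.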
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-752151 -n pause-752151
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-752151 -n pause-752151: exit status 2 (400.762334ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
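minikube status reports component state through its exit code as well as its stdout, which is why the harness annotates exit status 2 as "may be ok" while a pause test is in flight. A short hedged sketch of how a caller can recover that exit code with Go's os/exec (binary path and flags copied from the command above; error handling trimmed for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-752151", "-n", "pause-752151")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out) // e.g. "Running" or "Paused"
		if ee, ok := err.(*exec.ExitError); ok {
			// A nonzero code signals a component that is not fully Running;
			// here the harness saw 2 and treated it as potentially expected.
			fmt.Println("exit code:", ee.ExitCode())
		}
	}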
helpers_test.go:269: (dbg) Run:  kubectl --context pause-752151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-752151
helpers_test.go:243: (dbg) docker inspect pause-752151:

-- stdout --
	[
	    {
	        "Id": "347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291",
	        "Created": "2025-12-09T02:29:21.058262097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201727,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:29:24.63671276Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/hostname",
	        "HostsPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/hosts",
	        "LogPath": "/var/lib/docker/containers/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291/347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291-json.log",
	        "Name": "/pause-752151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-752151:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-752151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "347e08b037018dc2f0790b6a30246ceddc366a9c66cb291c35d546b634c8a291",
	                "LowerDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64a5e37b5e8fff60eafc2d8126d41da9f15ff7c1e0427a40c5481127c61d1145/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-752151",
	                "Source": "/var/lib/docker/volumes/pause-752151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-752151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-752151",
	                "name.minikube.sigs.k8s.io": "pause-752151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "454e03e3cf0e25e4e36d5a0e61adbb67d2c858772274ddc381358078bdb84637",
	            "SandboxKey": "/var/run/docker/netns/454e03e3cf0e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-752151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b74d10fb4a038520f3e46fa61d173cdd8b3cb882741d030a25d7eaa411802cd",
	                    "EndpointID": "f19f89c36aecb2927b69efad6586f47186fdc60c41f462af4d4e0d55d6e7b32e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ca:ea:bb:af:bc:78",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-752151",
	                        "347e08b03701"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
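Individual fields of the inspect payload above can be read with a Go template instead of scanning the full JSON. A minimal sketch (assuming the pause-752151 container still exists); against the dump above it prints 32978, the host port mapped to the guest's 22/tcp:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-752151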
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-752151 -n pause-752151
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-752151 -n pause-752151: exit status 2 (321.570402ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-752151 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-752151 logs -n 25: (1.157145066s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-155628 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │ 09 Dec 25 02:27 UTC │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --cancel-scheduled                                                                                              │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:27 UTC │ 09 Dec 25 02:27 UTC │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ stop    │ -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │ 09 Dec 25 02:28 UTC │
	│ delete  │ -p scheduled-stop-155628                                                                                                                 │ scheduled-stop-155628       │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │ 09 Dec 25 02:28 UTC │
	│ start   │ -p insufficient-storage-342795 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-342795 │ jenkins │ v1.37.0 │ 09 Dec 25 02:28 UTC │                     │
	│ delete  │ -p insufficient-storage-342795                                                                                                           │ insufficient-storage-342795 │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p pause-752151 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p offline-crio-654778 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-654778         │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p stopped-upgrade-768415 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-768415      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p missing-upgrade-857664 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-857664      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ stop    │ stopped-upgrade-768415 stop                                                                                                              │ stopped-upgrade-768415      │ jenkins │ v1.35.0 │ 09 Dec 25 02:29 UTC │ 09 Dec 25 02:29 UTC │
	│ start   │ -p missing-upgrade-857664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-857664      │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │                     │
	│ start   │ -p stopped-upgrade-768415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-768415      │ jenkins │ v1.37.0 │ 09 Dec 25 02:29 UTC │                     │
	│ start   │ -p pause-752151 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │ 09 Dec 25 02:30 UTC │
	│ delete  │ -p offline-crio-654778                                                                                                                   │ offline-crio-654778         │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │ 09 Dec 25 02:30 UTC │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-190944   │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │                     │
	│ pause   │ -p pause-752151 --alsologtostderr -v=5                                                                                                   │ pause-752151                │ jenkins │ v1.37.0 │ 09 Dec 25 02:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
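The tail of the audit trail doubles as a repro recipe; a minimal sketch, replaying the three pause-752151 rows above with the same workspace binary:

	out/minikube-linux-amd64 start -p pause-752151 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p pause-752151 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 pause -p pause-752151 --alsologtostderr -v=5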
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:30:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:30:07.615293  215247 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:30:07.615390  215247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.615399  215247 out.go:374] Setting ErrFile to fd 2...
	I1209 02:30:07.615404  215247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:30:07.615597  215247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:30:07.616039  215247 out.go:368] Setting JSON to false
	I1209 02:30:07.617021  215247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4357,"bootTime":1765243051,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:30:07.617069  215247 start.go:143] virtualization: kvm guest
	I1209 02:30:07.618655  215247 out.go:179] * [kubernetes-upgrade-190944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:30:07.619814  215247 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:30:07.619841  215247 notify.go:221] Checking for updates...
	I1209 02:30:07.621821  215247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:30:07.622841  215247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:30:07.623739  215247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:30:07.624732  215247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:30:07.625663  215247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:30:07.626995  215247 config.go:182] Loaded profile config "missing-upgrade-857664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:30:07.627132  215247 config.go:182] Loaded profile config "pause-752151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:30:07.627219  215247 config.go:182] Loaded profile config "stopped-upgrade-768415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:30:07.627308  215247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:30:07.650347  215247 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:30:07.650487  215247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:30:07.711014  215247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-09 02:30:07.700512611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:30:07.711163  215247 docker.go:319] overlay module found
	I1209 02:30:07.712697  215247 out.go:179] * Using the docker driver based on user configuration
	I1209 02:30:07.300734  213860 pod_ready.go:83] waiting for pod "kube-scheduler-pause-752151" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:30:07.701232  213860 pod_ready.go:94] pod "kube-scheduler-pause-752151" is "Ready"
	I1209 02:30:07.701263  213860 pod_ready.go:86] duration metric: took 400.500798ms for pod "kube-scheduler-pause-752151" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:30:07.701279  213860 pod_ready.go:40] duration metric: took 1.604426274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:30:07.749209  213860 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:30:07.753763  213860 out.go:179] * Done! kubectl is now configured to use "pause-752151" cluster and "default" namespace by default
	I1209 02:30:07.713713  215247 start.go:309] selected driver: docker
	I1209 02:30:07.713732  215247 start.go:927] validating driver "docker" against <nil>
	I1209 02:30:07.713746  215247 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:30:07.714528  215247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:30:07.775102  215247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-09 02:30:07.761303101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:30:07.775333  215247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:30:07.775614  215247 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 02:30:07.776930  215247 out.go:179] * Using Docker driver with root privileges
	I1209 02:30:07.777983  215247 cni.go:84] Creating CNI manager for ""
	I1209 02:30:07.778043  215247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:30:07.778053  215247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:30:07.778109  215247 start.go:353] cluster config:
	{Name:kubernetes-upgrade-190944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-190944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:30:07.779171  215247 out.go:179] * Starting "kubernetes-upgrade-190944" primary control-plane node in "kubernetes-upgrade-190944" cluster
	I1209 02:30:07.780296  215247 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:30:07.781256  215247 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:30:07.782180  215247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 02:30:07.782224  215247 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1209 02:30:07.782234  215247 cache.go:65] Caching tarball of preloaded images
	I1209 02:30:07.782294  215247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:30:07.782317  215247 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:30:07.782332  215247 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1209 02:30:07.782451  215247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/config.json ...
	I1209 02:30:07.782479  215247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/config.json: {Name:mk5832f6ecaaaf2b42ba47b3e1268a9a1ef18d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:30:07.804154  215247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:30:07.804171  215247 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:30:07.804188  215247 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:30:07.804220  215247 start.go:360] acquireMachinesLock for kubernetes-upgrade-190944: {Name:mkdb0b72b48cd2eea012966a6a72e94ae423c0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:30:07.804334  215247 start.go:364] duration metric: took 90.334µs to acquireMachinesLock for "kubernetes-upgrade-190944"
	I1209 02:30:07.804364  215247 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-190944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-190944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:30:07.804456  215247 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:30:08.279735  210910 cli_runner.go:164] Run: docker container inspect missing-upgrade-857664 --format={{.State.Status}}
	W1209 02:30:08.298214  210910 cli_runner.go:211] docker container inspect missing-upgrade-857664 --format={{.State.Status}} returned with exit code 1
	I1209 02:30:08.298280  210910 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-857664": docker container inspect missing-upgrade-857664 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-857664
	I1209 02:30:08.298292  210910 oci.go:673] temporary error: container missing-upgrade-857664 status is  but expect it to be exited
	I1209 02:30:08.298337  210910 oci.go:88] couldn't shut down missing-upgrade-857664 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-857664": docker container inspect missing-upgrade-857664 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-857664
	 
	I1209 02:30:08.298406  210910 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-857664
	I1209 02:30:08.316172  210910 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-857664
	W1209 02:30:08.333663  210910 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-857664 returned with exit code 1
	I1209 02:30:08.333759  210910 cli_runner.go:164] Run: docker network inspect missing-upgrade-857664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:30:08.351063  210910 cli_runner.go:164] Run: docker network rm missing-upgrade-857664
	I1209 02:30:08.460880  210910 fix.go:124] Sleeping 1 second for extra luck!
	I1209 02:30:09.460995  210910 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:30:09.467512  210910 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:30:09.467699  210910 start.go:159] libmachine.API.Create for "missing-upgrade-857664" (driver="docker")
	I1209 02:30:09.467743  210910 client.go:173] LocalClient.Create starting
	I1209 02:30:09.467869  210910 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:30:09.467922  210910 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:09.467961  210910 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:09.468050  210910 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:30:09.468082  210910 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:09.468107  210910 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:09.468430  210910 cli_runner.go:164] Run: docker network inspect missing-upgrade-857664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:30:09.962761  211634 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 02:30:09.962809  211634 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1209 02:30:07.805998  215247 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:30:07.806234  215247 start.go:159] libmachine.API.Create for "kubernetes-upgrade-190944" (driver="docker")
	I1209 02:30:07.806282  215247 client.go:173] LocalClient.Create starting
	I1209 02:30:07.806339  215247 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:30:07.806367  215247 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:07.806384  215247 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:07.806442  215247 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:30:07.806464  215247 main.go:143] libmachine: Decoding PEM data...
	I1209 02:30:07.806476  215247 main.go:143] libmachine: Parsing certificate...
	I1209 02:30:07.806794  215247 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-190944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:30:07.824281  215247 cli_runner.go:211] docker network inspect kubernetes-upgrade-190944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:30:07.824338  215247 network_create.go:284] running [docker network inspect kubernetes-upgrade-190944] to gather additional debugging logs...
	I1209 02:30:07.824359  215247 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-190944
	W1209 02:30:07.842960  215247 cli_runner.go:211] docker network inspect kubernetes-upgrade-190944 returned with exit code 1
	I1209 02:30:07.842986  215247 network_create.go:287] error running [docker network inspect kubernetes-upgrade-190944]: docker network inspect kubernetes-upgrade-190944: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-190944 not found
	I1209 02:30:07.842996  215247 network_create.go:289] output of [docker network inspect kubernetes-upgrade-190944]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-190944 not found
	
	** /stderr **
	I1209 02:30:07.843088  215247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:30:07.861356  215247 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:30:07.861798  215247 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:30:07.862335  215247 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:30:07.862933  215247 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1b74d10fb4a0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:43:f4:73:36:3b} reservation:<nil>}
	I1209 02:30:07.863584  215247 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df7190}
	I1209 02:30:07.863610  215247 network_create.go:124] attempt to create docker network kubernetes-upgrade-190944 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1209 02:30:07.863663  215247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-190944 kubernetes-upgrade-190944
	I1209 02:30:07.915935  215247 network_create.go:108] docker network kubernetes-upgrade-190944 192.168.85.0/24 created
	I1209 02:30:07.915967  215247 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-190944" container
	I1209 02:30:07.916043  215247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:30:07.933370  215247 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-190944 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-190944 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:30:07.951113  215247 oci.go:103] Successfully created a docker volume kubernetes-upgrade-190944
	I1209 02:30:07.951183  215247 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-190944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-190944 --entrypoint /usr/bin/test -v kubernetes-upgrade-190944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:30:08.316940  215247 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-190944
	I1209 02:30:08.317010  215247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 02:30:08.317025  215247 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:30:08.317080  215247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-190944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:30:12.227008  215247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-190944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.909889599s)
	I1209 02:30:12.227047  215247 kic.go:203] duration metric: took 3.910013782s to extract preloaded images to volume ...
	W1209 02:30:12.227148  215247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:30:12.227189  215247 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:30:12.227233  215247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:30:12.287605  215247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-190944 --name kubernetes-upgrade-190944 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-190944 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-190944 --network kubernetes-upgrade-190944 --ip 192.168.85.2 --volume kubernetes-upgrade-190944:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
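The subnet scan above (192.168.49/58/67/76.0/24 taken, 192.168.85.0/24 chosen) can be reproduced with the same Go template minikube feeds to docker network inspect; a minimal sketch listing each network's claimed subnet:

	docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q)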
	
	
	==> CRI-O <==
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77091022Z" level=info msg="RDT not available in the host system"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.770918753Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77160658Z" level=info msg="Conmon does support the --sync option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.771619013Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.771628905Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.772287499Z" level=info msg="Conmon does support the --sync option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.772301419Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.775858185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.775876266Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.776320577Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.77668316Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.776733313Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846404238Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9w5cw Namespace:kube-system ID:887eccc27a88b8316b98fa690b12287c676a479657065f130aadcb0a9a82b9e8 UID:fabd0092-8d8e-481f-b35c-4e9deed5ec10 NetNS:/var/run/netns/1ce99800-e837-4f66-bbf9-8ee1ee69cf8f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128250}] Aliases:map[]}"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846564395Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9w5cw for CNI network kindnet (type=ptp)"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.846974492Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847000754Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847057495Z" level=info msg="Create NRI interface"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847153304Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847165709Z" level=info msg="runtime interface created"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847178641Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847186323Z" level=info msg="runtime interface starting up..."
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847193908Z" level=info msg="starting plugins..."
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847208708Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 02:30:04 pause-752151 crio[2160]: time="2025-12-09T02:30:04.847474091Z" level=info msg="No systemd watchdog enabled"
	Dec 09 02:30:04 pause-752151 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
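The escaped TOML block in the log above is CRI-O's own startup snapshot of its effective configuration. To read it unescaped on the node, one option (assuming the stock CRI-O config path inside the kicbase image; drop-ins, if any, live under /etc/crio/crio.conf.d/) is:

	out/minikube-linux-amd64 -p pause-752151 ssh -- sudo cat /etc/crio/crio.conf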
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b2d9434e92301       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   887eccc27a88b       coredns-66bc5c9577-9w5cw               kube-system
	7ea8f1e3efec5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   694f2ca544335       kindnet-nplkf                          kube-system
	34031729cf287       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   25 seconds ago      Running             kube-proxy                0                   6c107dfe95be4       kube-proxy-8t4qw                       kube-system
	4f7e6a985f0e8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   8807c8b22b2d0       etcd-pause-752151                      kube-system
	ee0ffb0c1cf67       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   030bb9df9f7d1       kube-controller-manager-pause-752151   kube-system
	127fe1fa839b2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   f1240030c934b       kube-scheduler-pause-752151            kube-system
	bb9aa5d3e80c8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   47a885a849dd4       kube-apiserver-pause-752151            kube-system
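The listing above can be regenerated inside the node at any point during the test; a minimal sketch via minikube's ssh passthrough (crictl ps -a includes exited containers as well as running ones):

	out/minikube-linux-amd64 -p pause-752151 ssh -- sudo crictl ps -a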
	
	
	==> coredns [b2d9434e92301549a225bd591e7710e4cd7c9c915ae793d42125359b5ce6da44] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41501 - 44238 "HINFO IN 6963736022572244806.6696722005361087735. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124808636s
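The same coredns output is also reachable through the API server; a minimal sketch, assuming the kubeconfig still points at pause-752151 and the standard kube-dns selector mentioned elsewhere in this log:

	kubectl -n kube-system logs -l k8s-app=kube-dns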
	
	
	==> describe nodes <==
	Name:               pause-752151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-752151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-752151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_29_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:29:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-752151
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:30:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:29:59 +0000   Tue, 09 Dec 2025 02:29:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-752151
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                d5cac5cf-0be9-4969-b652-49008d1d35ad
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9w5cw                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-752151                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-nplkf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-752151             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-752151    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-8t4qw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-752151             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-752151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-752151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-752151 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-752151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-752151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-752151 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-752151 event: Registered Node pause-752151 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-752151 status is now: NodeReady
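
	The node description above is ordinary kubectl output embedded in the dump; assuming the profile's context is still present in the kubeconfig, the same view can be regenerated with (a sketch):

		kubectl --context pause-752151 describe node pause-752151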
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
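
	The repeated "martian source" entries are the kernel flagging packets that claim a loopback source (127.0.0.1) while arriving on eth0, which route validation treats as impossible. Whether such packets get logged is controlled by the log_martians sysctl, which can be inspected on the node with (a sketch, assuming the profile is up):

		minikube ssh -p pause-752151 -- sysctl net.ipv4.conf.all.log_martians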
	
	
	==> etcd [4f7e6a985f0e8891c0a15ab60c10fb7075bff9a41bea5b50f87686763b483fed] <==
	{"level":"warn","ts":"2025-12-09T02:29:39.737097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.744282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.759227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.766477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.773240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.779488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.787118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.794255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.802389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.819731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.825871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.833041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.853273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.859443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.866129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.872580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.879649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.886819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.894804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.901364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.916438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.924458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.934379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:29:39.992274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41766","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:30:14 up  1:12,  0 user,  load average: 3.72, 1.89, 1.39
	Linux pause-752151 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ea8f1e3efec54ec8fb1ca6b81827daf64058110d261c63699952e81433b2248] <==
	I1209 02:29:49.198505       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:29:49.293738       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1209 02:29:49.293877       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:29:49.293896       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:29:49.293925       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:29:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:29:49.496508       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:29:49.496765       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:29:49.496917       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:29:49.497170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:29:49.893673       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:29:49.893705       1 metrics.go:72] Registering metrics
	I1209 02:29:49.893814       1 controller.go:711] "Syncing nftables rules"
	I1209 02:29:59.498710       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:29:59.498765       1 main.go:301] handling current node
	I1209 02:30:09.496456       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:30:09.496488       1 main.go:301] handling current node
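
	The earlier "nri plugin exited" line shows kindnet failing to reach the NRI socket at 02:29:49; per the CRI-O log at the top of this dump, NRI only reports starting at 02:30:04, after the runtime restart. Whether the socket exists now could be checked with (a sketch):

		minikube ssh -p pause-752151 -- ls -l /var/run/nri/nri.sock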
	
	
	==> kube-apiserver [bb9aa5d3e80c849f40efc3ea76b2d24afc2ded01f8ed4a1d393726b86571c45f] <==
	I1209 02:29:40.656790       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:29:40.657309       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:29:40.658961       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:40.659286       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:29:40.665403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:40.665587       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:29:40.665609       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:29:40.673958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:29:41.563550       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 02:29:41.570573       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:29:41.570704       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:29:42.351807       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:29:42.399314       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:29:42.468893       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:29:42.478070       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1209 02:29:42.479362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:29:42.484580       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:29:42.593574       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:29:43.265582       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:29:43.274135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:29:43.282248       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:29:48.344663       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:48.348884       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:29:48.443069       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:29:48.641540       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ee0ffb0c1cf672571af67b37f4594ca3add7081909e8ea3aee926d7721cc3136] <==
	I1209 02:29:47.601317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:29:47.606490       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:29:47.619629       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:29:47.628955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:29:47.629979       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:29:47.636797       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:29:47.640883       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:29:47.642016       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 02:29:47.642064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1209 02:29:47.642357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 02:29:47.642450       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 02:29:47.642791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:29:47.642850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:29:47.643989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 02:29:47.644135       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:29:47.644236       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-752151"
	I1209 02:29:47.644303       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 02:29:47.644678       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:29:47.645210       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:29:47.646681       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 02:29:47.647265       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1209 02:29:47.647516       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:29:47.652726       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:29:47.658704       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 02:30:02.646735       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [34031729cf287c69f0576709a533d9026808701d698e1a335ab1d5dbf2c2af85] <==
	I1209 02:29:49.055079       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:29:49.136219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:29:49.236397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:29:49.236432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1209 02:29:49.236552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:29:49.255223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:29:49.255281       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:29:49.261863       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:29:49.262273       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:29:49.262307       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:29:49.263713       1 config.go:200] "Starting service config controller"
	I1209 02:29:49.263740       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:29:49.263796       1 config.go:309] "Starting node config controller"
	I1209 02:29:49.263819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:29:49.263895       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:29:49.263907       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:29:49.263927       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:29:49.263932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:29:49.364795       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:29:49.364863       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:29:49.364882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:29:49.364924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [127fe1fa839b2c7a9b9a1201d889786cfbfce0cae2ad4c8738e15bb396fd2a20] <==
	E1209 02:29:40.629337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:29:40.629373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:29:40.629433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:29:40.629448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:29:40.629446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:29:40.629528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:29:40.629580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:29:41.438387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 02:29:41.442741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:29:41.448484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:29:41.468945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:29:41.507970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:29:41.521414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 02:29:41.523298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 02:29:41.603796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 02:29:41.711887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:29:41.713469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1209 02:29:41.731087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 02:29:41.825155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 02:29:41.849044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:29:41.887560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 02:29:41.994236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:29:42.060855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 02:29:42.064008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1209 02:29:43.625496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
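
	The burst of "Failed to watch ... is forbidden" errors is confined to the first seconds after startup, while RBAC for system:kube-scheduler is still being bootstrapped; the final line shows the scheduler's caches syncing at 02:29:43, after which the errors stop. If in doubt, the binding can be confirmed with (a sketch):

		kubectl --context pause-752151 get clusterrolebinding system:kube-scheduler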
	
	
	==> kubelet <==
	Dec 09 02:29:44 pause-752151 kubelet[1296]: E1209 02:29:44.253487    1296 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-752151\" already exists" pod="kube-system/kube-controller-manager-pause-752151"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.262067    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-752151" podStartSLOduration=2.262035285 podStartE2EDuration="2.262035285s" podCreationTimestamp="2025-12-09 02:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.252055803 +0000 UTC m=+1.179016897" watchObservedRunningTime="2025-12-09 02:29:44.262035285 +0000 UTC m=+1.188996369"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.262238    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-752151" podStartSLOduration=1.262226241 podStartE2EDuration="1.262226241s" podCreationTimestamp="2025-12-09 02:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.261279886 +0000 UTC m=+1.188240977" watchObservedRunningTime="2025-12-09 02:29:44.262226241 +0000 UTC m=+1.189187337"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.271194    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-752151" podStartSLOduration=3.271177562 podStartE2EDuration="3.271177562s" podCreationTimestamp="2025-12-09 02:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.271064244 +0000 UTC m=+1.198025330" watchObservedRunningTime="2025-12-09 02:29:44.271177562 +0000 UTC m=+1.198138652"
	Dec 09 02:29:44 pause-752151 kubelet[1296]: I1209 02:29:44.292276    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-752151" podStartSLOduration=1.29225519 podStartE2EDuration="1.29225519s" podCreationTimestamp="2025-12-09 02:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:44.282112575 +0000 UTC m=+1.209073666" watchObservedRunningTime="2025-12-09 02:29:44.29225519 +0000 UTC m=+1.219216283"
	Dec 09 02:29:47 pause-752151 kubelet[1296]: I1209 02:29:47.647102    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 02:29:47 pause-752151 kubelet[1296]: I1209 02:29:47.653424    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724431    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cad3a92-cc65-4362-a23a-927d510294c9-kube-proxy\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724471    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cad3a92-cc65-4362-a23a-927d510294c9-xtables-lock\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724498    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79t84\" (UniqueName: \"kubernetes.io/projected/0cad3a92-cc65-4362-a23a-927d510294c9-kube-api-access-79t84\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724523    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-cni-cfg\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724545    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-lib-modules\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724572    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58e1cd7f-e428-4d13-bc94-574edc64fc45-xtables-lock\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724585    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkltv\" (UniqueName: \"kubernetes.io/projected/58e1cd7f-e428-4d13-bc94-574edc64fc45-kube-api-access-zkltv\") pod \"kindnet-nplkf\" (UID: \"58e1cd7f-e428-4d13-bc94-574edc64fc45\") " pod="kube-system/kindnet-nplkf"
	Dec 09 02:29:48 pause-752151 kubelet[1296]: I1209 02:29:48.724599    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cad3a92-cc65-4362-a23a-927d510294c9-lib-modules\") pod \"kube-proxy-8t4qw\" (UID: \"0cad3a92-cc65-4362-a23a-927d510294c9\") " pod="kube-system/kube-proxy-8t4qw"
	Dec 09 02:29:49 pause-752151 kubelet[1296]: I1209 02:29:49.275684    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8t4qw" podStartSLOduration=1.27566307 podStartE2EDuration="1.27566307s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:49.267889372 +0000 UTC m=+6.194850474" watchObservedRunningTime="2025-12-09 02:29:49.27566307 +0000 UTC m=+6.202624160"
	Dec 09 02:29:49 pause-752151 kubelet[1296]: I1209 02:29:49.284038    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nplkf" podStartSLOduration=1.2840166499999999 podStartE2EDuration="1.28401665s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:29:49.275871829 +0000 UTC m=+6.202832916" watchObservedRunningTime="2025-12-09 02:29:49.28401665 +0000 UTC m=+6.210977741"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.590214    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.700816    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkz7\" (UniqueName: \"kubernetes.io/projected/fabd0092-8d8e-481f-b35c-4e9deed5ec10-kube-api-access-glkz7\") pod \"coredns-66bc5c9577-9w5cw\" (UID: \"fabd0092-8d8e-481f-b35c-4e9deed5ec10\") " pod="kube-system/coredns-66bc5c9577-9w5cw"
	Dec 09 02:29:59 pause-752151 kubelet[1296]: I1209 02:29:59.700862    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fabd0092-8d8e-481f-b35c-4e9deed5ec10-config-volume\") pod \"coredns-66bc5c9577-9w5cw\" (UID: \"fabd0092-8d8e-481f-b35c-4e9deed5ec10\") " pod="kube-system/coredns-66bc5c9577-9w5cw"
	Dec 09 02:30:00 pause-752151 kubelet[1296]: I1209 02:30:00.288795    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9w5cw" podStartSLOduration=12.288774313 podStartE2EDuration="12.288774313s" podCreationTimestamp="2025-12-09 02:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:30:00.288565321 +0000 UTC m=+17.215526412" watchObservedRunningTime="2025-12-09 02:30:00.288774313 +0000 UTC m=+17.215735403"
	Dec 09 02:30:08 pause-752151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:30:08 pause-752151 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:30:08 pause-752151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:30:08 pause-752151 systemd[1]: kubelet.service: Consumed 1.072s CPU time.
	

-- /stdout --
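
The tail of the kubelet log shows systemd stopping kubelet.service at 02:30:08, consistent with the pause operation under test shutting the kubelet down before freezing containers. Its state at this point could be rechecked with (a sketch):

	minikube ssh -p pause-752151 -- sudo systemctl status kubelet --no-pager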
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-752151 -n pause-752151
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-752151 -n pause-752151: exit status 2 (331.016706ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-752151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.127712ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
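
The MK_ADDON_ENABLE_PAUSED exit comes from minikube's pre-flight paused check, which shells into the node and lists containers with runc; "open /run/runc: no such file or directory" means runc's state directory is missing on this CRI-O node, so the listing itself fails rather than reporting any unpaused containers. The failing step can be reproduced in isolation with (a sketch, using the exact command from the error):

	minikube ssh -p old-k8s-version-126117 -- sudo runc list -f json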
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-126117 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-126117 describe deploy/metrics-server -n kube-system: exit status 1 (57.825549ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-126117 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
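
The assertion expects the metrics-server Deployment's image to carry the fake.domain registry prefix passed via --registries; because the enable command exited early, the Deployment was never created and the deployment info is empty. Had it existed, the image could be read directly with (a sketch):

	kubectl --context old-k8s-version-126117 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'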
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-126117
helpers_test.go:243: (dbg) docker inspect old-k8s-version-126117:

-- stdout --
	[
	    {
	        "Id": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	        "Created": "2025-12-09T02:35:09.203047327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282614,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:35:09.490418793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hosts",
	        "LogPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4-json.log",
	        "Name": "/old-k8s-version-126117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-126117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-126117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	                "LowerDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-126117",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-126117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-126117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6d369e1287367427c7c45bcd4caaff9681c3277655bb820892d54575814bc2cb",
	            "SandboxKey": "/var/run/docker/netns/6d369e128736",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-126117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecc05a83343c9bbe58006fef4c60d0178931361725a834370b23a8555dfe27ce",
	                    "EndpointID": "44075942c05fe342b364a598aa8c13f6a680345d81a97dc5e6be4ed62e651bf6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ae:2a:b6:3c:38:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-126117",
	                        "fdb4a1a34663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
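
The inspect output above ends with the container's port map, and the cli_runner lines later in this report read the host-mapped SSH port from exactly that structure with a Go template. A minimal, self-contained Go sketch of the same lookup (hostSSHPort is an illustrative helper, not minikube code):

	// hostSSHPort shells out to `docker container inspect` with the template
	// visible in the cli_runner log lines: it walks NetworkSettings.Ports ->
	// "22/tcp" -> first binding -> HostPort.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("old-k8s-version-126117")
		if err != nil {
			panic(err)
		}
		fmt.Println(port) // prints "33058" for the inspect output above
	}

Against the inspect output above this prints 33058, the binding for 22/tcp.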
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25: (1.085268764s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-933067 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo containerd config dump                                                                                                                                                                                                  │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo crio config                                                                                                                                                                                                             │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ delete  │ -p cilium-933067                                                                                                                                                                                                                              │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:32 UTC │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:33 UTC │
	│ delete  │ -p stopped-upgrade-768415                                                                                                                                                                                                                     │ stopped-upgrade-768415       │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p force-systemd-flag-598501 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ ssh     │ force-systemd-flag-598501 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ delete  │ -p force-systemd-flag-598501                                                                                                                                                                                                                  │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p cert-options-465214 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-190944                                                                                                                                                                                                                  │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ cert-options-465214 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ -p cert-options-465214 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                        │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                     │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
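
Each Audit row above records one CLI invocation made by the test harness; an empty END TIME means the command never recorded a successful finish. A hedged Go sketch of replaying the final row (the metrics-server enable that precedes this failure) with os/exec; the wrapper is illustrative, while the binary path and arguments are copied from the table:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Arguments copied verbatim from the last Audit row above; note that
		// row has no END TIME, i.e. the command did not complete cleanly.
		cmd := exec.Command("out/minikube-linux-amd64",
			"addons", "enable", "metrics-server", "-p", "old-k8s-version-126117",
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}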
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:35:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:35:12.078819  284952 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:35:12.079070  284952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:35:12.079083  284952 out.go:374] Setting ErrFile to fd 2...
	I1209 02:35:12.079090  284952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:35:12.079319  284952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:35:12.079877  284952 out.go:368] Setting JSON to false
	I1209 02:35:12.081141  284952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4661,"bootTime":1765243051,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:35:12.081218  284952 start.go:143] virtualization: kvm guest
	I1209 02:35:12.082978  284952 out.go:179] * [default-k8s-diff-port-512414] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:35:12.084165  284952 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:35:12.084173  284952 notify.go:221] Checking for updates...
	I1209 02:35:12.086205  284952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:35:12.087371  284952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:35:12.088476  284952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:35:12.089692  284952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:35:12.090884  284952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:35:12.092455  284952 config.go:182] Loaded profile config "cert-expiration-572052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:35:12.092628  284952 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:35:12.092759  284952 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:35:12.092859  284952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:35:12.117726  284952 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:35:12.117817  284952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:35:12.184409  284952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-09 02:35:12.174810479 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:35:12.184512  284952 docker.go:319] overlay module found
	I1209 02:35:12.186132  284952 out.go:179] * Using the docker driver based on user configuration
	I1209 02:35:12.187233  284952 start.go:309] selected driver: docker
	I1209 02:35:12.187246  284952 start.go:927] validating driver "docker" against <nil>
	I1209 02:35:12.187257  284952 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:35:12.187796  284952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:35:12.249816  284952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-09 02:35:12.233505887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:35:12.250046  284952 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:35:12.250338  284952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:35:12.253512  284952 out.go:179] * Using Docker driver with root privileges
	I1209 02:35:12.254568  284952 cni.go:84] Creating CNI manager for ""
	I1209 02:35:12.254651  284952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:12.254667  284952 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:35:12.254739  284952 start.go:353] cluster config:
	{Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:35:12.256084  284952 out.go:179] * Starting "default-k8s-diff-port-512414" primary control-plane node in "default-k8s-diff-port-512414" cluster
	I1209 02:35:12.257207  284952 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:35:12.258384  284952 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:35:12.259387  284952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:35:12.259414  284952 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:35:12.259420  284952 cache.go:65] Caching tarball of preloaded images
	I1209 02:35:12.259492  284952 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:35:12.259504  284952 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:35:12.259509  284952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:35:12.259624  284952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/config.json ...
	I1209 02:35:12.259713  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/config.json: {Name:mkab78b040e2c935d4b75d8ca328541152749c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:12.281970  284952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:35:12.281990  284952 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:35:12.282008  284952 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:35:12.282031  284952 start.go:360] acquireMachinesLock for default-k8s-diff-port-512414: {Name:mkab5f92e1212a76466842092d867a8cd62c204f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:35:12.282113  284952 start.go:364] duration metric: took 65.27µs to acquireMachinesLock for "default-k8s-diff-port-512414"
	I1209 02:35:12.282133  284952 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:35:12.282180  284952 start.go:125] createHost starting for "" (driver="docker")
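
The two config dumps above (start.go:353 and start.go:93) print the same cluster config struct. A reduced Go sketch with only the fields that matter for this run; the field names mirror the dump, but this is not the full minikube ClusterConfig definition:

	package main

	import "fmt"

	// Values below are taken from the dumped config for
	// default-k8s-diff-port-512414; everything else is omitted.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
		ServiceCIDR       string
	}

	type ClusterConfig struct {
		Name             string
		Memory           int // MB
		CPUs             int
		Driver           string
		APIServerPort    int
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name:          "default-k8s-diff-port-512414",
			Memory:        3072,
			CPUs:          2,
			Driver:        "docker",
			APIServerPort: 8444, // the "diff port": 8444 instead of the default 8443
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.34.2",
				ClusterName:       "default-k8s-diff-port-512414",
				ContainerRuntime:  "crio",
				NetworkPlugin:     "cni",
				ServiceCIDR:       "10.96.0.0/12",
			},
		}
		fmt.Printf("%+v\n", cfg)
	}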
	I1209 02:35:09.118527  281066 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-126117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (5.273629572s)
	I1209 02:35:09.118557  281066 kic.go:203] duration metric: took 5.273778957s to extract preloaded images to volume ...
	W1209 02:35:09.118685  281066 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:35:09.118740  281066 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:35:09.118788  281066 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:35:09.185519  281066 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-126117 --name old-k8s-version-126117 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-126117 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-126117 --network old-k8s-version-126117 --ip 192.168.85.2 --volume old-k8s-version-126117:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:35:09.753876  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Running}}
	I1209 02:35:09.777524  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:09.799367  281066 cli_runner.go:164] Run: docker exec old-k8s-version-126117 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:35:09.898106  281066 oci.go:144] the created container "old-k8s-version-126117" has a running status.
	I1209 02:35:09.898145  281066 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa...
	I1209 02:35:10.050586  281066 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:35:10.082693  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:10.112010  281066 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:35:10.112040  281066 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-126117 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:35:10.169758  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:10.192703  281066 machine.go:94] provisionDockerMachine start ...
	I1209 02:35:10.192825  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:10.213685  281066 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:10.213927  281066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1209 02:35:10.213935  281066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:35:10.350566  281066 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-126117
	
	I1209 02:35:10.350595  281066 ubuntu.go:182] provisioning hostname "old-k8s-version-126117"
	I1209 02:35:10.350686  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:10.371975  281066 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:10.372286  281066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1209 02:35:10.372308  281066 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-126117 && echo "old-k8s-version-126117" | sudo tee /etc/hostname
	I1209 02:35:10.531968  281066 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-126117
	
	I1209 02:35:10.532067  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:10.553582  281066 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:10.554013  281066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1209 02:35:10.554067  281066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-126117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-126117/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-126117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:35:10.696289  281066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:35:10.696321  281066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:35:10.696343  281066 ubuntu.go:190] setting up certificates
	I1209 02:35:10.696356  281066 provision.go:84] configureAuth start
	I1209 02:35:10.696409  281066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-126117
	I1209 02:35:10.716626  281066 provision.go:143] copyHostCerts
	I1209 02:35:10.716713  281066 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:35:10.716727  281066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:35:10.716790  281066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:35:10.716901  281066 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:35:10.716910  281066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:35:10.716954  281066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:35:10.717049  281066 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:35:10.717057  281066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:35:10.717084  281066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:35:10.717176  281066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-126117 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-126117]
	I1209 02:35:10.799657  281066 provision.go:177] copyRemoteCerts
	I1209 02:35:10.799717  281066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:35:10.799764  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:10.823188  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:10.922167  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 02:35:10.945722  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:35:10.969105  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 02:35:10.990205  281066 provision.go:87] duration metric: took 293.754452ms to configureAuth
	I1209 02:35:10.990239  281066 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:35:10.990425  281066 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:35:10.990538  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:11.014726  281066 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:11.014934  281066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1209 02:35:11.014950  281066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:35:11.453225  281066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:35:11.453251  281066 machine.go:97] duration metric: took 1.260527471s to provisionDockerMachine
	I1209 02:35:11.453264  281066 client.go:176] duration metric: took 8.312562904s to LocalClient.Create
	I1209 02:35:11.453285  281066 start.go:167] duration metric: took 8.312664439s to libmachine.API.Create "old-k8s-version-126117"
	I1209 02:35:11.453293  281066 start.go:293] postStartSetup for "old-k8s-version-126117" (driver="docker")
	I1209 02:35:11.453306  281066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:35:11.453368  281066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:35:11.453411  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:11.482428  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:11.585364  281066 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:35:11.588958  281066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:35:11.588990  281066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:35:11.589001  281066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:35:11.589055  281066 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:35:11.589147  281066 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:35:11.589264  281066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:35:11.596924  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:11.618368  281066 start.go:296] duration metric: took 165.058699ms for postStartSetup
	I1209 02:35:11.618870  281066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-126117
	I1209 02:35:11.638980  281066 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/config.json ...
	I1209 02:35:11.639233  281066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:11.639286  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:11.656320  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:11.747269  281066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:35:11.754769  281066 start.go:128] duration metric: took 8.616315211s to createHost
	I1209 02:35:11.754795  281066 start.go:83] releasing machines lock for "old-k8s-version-126117", held for 8.616452029s
	I1209 02:35:11.754873  281066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-126117
	I1209 02:35:11.774879  281066 ssh_runner.go:195] Run: cat /version.json
	I1209 02:35:11.774934  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:11.774936  281066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:35:11.775005  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:11.794158  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:11.794863  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:11.962502  281066 ssh_runner.go:195] Run: systemctl --version
	I1209 02:35:11.969931  281066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:35:12.006261  281066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:35:12.010869  281066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:35:12.010934  281066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:35:12.034892  281066 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 02:35:12.034913  281066 start.go:496] detecting cgroup driver to use...
	I1209 02:35:12.034941  281066 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:35:12.034981  281066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:35:12.050646  281066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:35:12.063703  281066 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:35:12.063763  281066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:35:12.082370  281066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:35:12.101517  281066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:35:12.198693  281066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:35:12.305263  281066 docker.go:234] disabling docker service ...
	I1209 02:35:12.305321  281066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:35:12.324967  281066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:35:12.338176  281066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:35:12.430454  281066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:35:12.529463  281066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:35:12.542154  281066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:35:12.556340  281066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1209 02:35:12.556394  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.568068  281066 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:35:12.568126  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.576432  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.586209  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.595894  281066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:35:12.603851  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.613211  281066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.627406  281066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:12.636025  281066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:35:12.642936  281066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:35:12.650135  281066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:12.738238  281066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:35:12.881126  281066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:35:12.881199  281066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:35:12.885880  281066 start.go:564] Will wait 60s for crictl version
	I1209 02:35:12.885946  281066 ssh_runner.go:195] Run: which crictl
	I1209 02:35:12.889433  281066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:35:12.915163  281066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:35:12.915223  281066 ssh_runner.go:195] Run: crio --version
	I1209 02:35:12.945537  281066 ssh_runner.go:195] Run: crio --version
	I1209 02:35:12.979111  281066 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
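
The block above shows the runtime configuration pass: crictl is pointed at cri-o's socket, then /etc/crio/crio.conf.d/02-crio.conf is rewritten in place with sed before crio is restarted. A minimal Go sketch replaying the same commands, simplified to run locally with sudo rather than over SSH inside the node container:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(script string) error {
		out, err := exec.Command("sudo", "sh", "-c", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", script, err, out)
		}
		return nil
	}

	func main() {
		steps := []string{
			// crictl should talk to cri-o's socket (the /etc/crictl.yaml write above).
			`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
			// Same sed edits as in the log: pause image and systemd cgroup manager.
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`systemctl restart crio`,
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				panic(err)
			}
		}
	}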
	I1209 02:35:09.997685  282749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:35:09.997889  282749 start.go:159] libmachine.API.Create for "no-preload-185074" (driver="docker")
	I1209 02:35:09.997916  282749 client.go:173] LocalClient.Create starting
	I1209 02:35:09.998005  282749 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:35:09.998041  282749 main.go:143] libmachine: Decoding PEM data...
	I1209 02:35:09.998056  282749 main.go:143] libmachine: Parsing certificate...
	I1209 02:35:09.998102  282749 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:35:09.998120  282749 main.go:143] libmachine: Decoding PEM data...
	I1209 02:35:09.998132  282749 main.go:143] libmachine: Parsing certificate...
	I1209 02:35:09.998420  282749 cli_runner.go:164] Run: docker network inspect no-preload-185074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:35:10.015875  282749 cli_runner.go:211] docker network inspect no-preload-185074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:35:10.015929  282749 network_create.go:284] running [docker network inspect no-preload-185074] to gather additional debugging logs...
	I1209 02:35:10.015945  282749 cli_runner.go:164] Run: docker network inspect no-preload-185074
	W1209 02:35:10.036114  282749 cli_runner.go:211] docker network inspect no-preload-185074 returned with exit code 1
	I1209 02:35:10.036143  282749 network_create.go:287] error running [docker network inspect no-preload-185074]: docker network inspect no-preload-185074: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-185074 not found
	I1209 02:35:10.036156  282749 network_create.go:289] output of [docker network inspect no-preload-185074]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-185074 not found
	
	** /stderr **
	I1209 02:35:10.036245  282749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:35:10.058742  282749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:35:10.059534  282749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:35:10.060413  282749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:35:10.061448  282749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e7d4a9aa2f23 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:d1:3e:e7:fd:f1} reservation:<nil>}
	I1209 02:35:10.062603  282749 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ecc05a83343c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:d2:77:3b:89:79} reservation:<nil>}
	I1209 02:35:10.063372  282749 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7b2272db2afd IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:42:ea:f6:b5:7d:d6} reservation:<nil>}
	I1209 02:35:10.064827  282749 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca0c40}
	I1209 02:35:10.064856  282749 network_create.go:124] attempt to create docker network no-preload-185074 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1209 02:35:10.064908  282749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-185074 no-preload-185074
	I1209 02:35:10.127825  282749 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1209 02:35:10.128210  282749 network_create.go:108] docker network no-preload-185074 192.168.103.0/24 created
	I1209 02:35:10.128238  282749 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-185074" container
	I1209 02:35:10.128299  282749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:35:10.129019  282749 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1209 02:35:10.140944  282749 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1209 02:35:10.150855  282749 cli_runner.go:164] Run: docker volume create no-preload-185074 --label name.minikube.sigs.k8s.io=no-preload-185074 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:35:10.151477  282749 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1209 02:35:10.172985  282749 oci.go:103] Successfully created a docker volume no-preload-185074
	I1209 02:35:10.173056  282749 cli_runner.go:164] Run: docker run --rm --name no-preload-185074-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-185074 --entrypoint /usr/bin/test -v no-preload-185074:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:35:10.510421  282749 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1209 02:35:10.578196  282749 oci.go:107] Successfully prepared a docker volume no-preload-185074
	I1209 02:35:10.578241  282749 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1209 02:35:10.578321  282749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:35:10.578356  282749 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:35:10.578402  282749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:35:10.643254  282749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-185074 --name no-preload-185074 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-185074 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-185074 --network no-preload-185074 --ip 192.168.103.2 --volume no-preload-185074:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:35:10.839496  282749 cache.go:157] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 02:35:10.839517  282749 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 867.088396ms
	I1209 02:35:10.839530  282749 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 02:35:10.942227  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Running}}
	I1209 02:35:10.963670  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:10.983540  282749 cli_runner.go:164] Run: docker exec no-preload-185074 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:35:11.035287  282749 oci.go:144] the created container "no-preload-185074" has a running status.
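After the detached docker run, the log inspects the container's state before provisioning begins. A sketch of that readiness check (container name taken from the log above):

    # poll until the container reports Running
    until [ "$(docker container inspect -f '{{.State.Running}}' no-preload-185074)" = "true" ]; do
      sleep 0.5
    done
    # same sanity probe the log performs once the container is up
    docker exec no-preload-185074 stat /var/lib/dpkg/alternatives/iptables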
	I1209 02:35:11.035315  282749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa...
	I1209 02:35:11.050702  282749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:35:11.078665  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:11.105235  282749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:35:11.105257  282749 kic_runner.go:114] Args: [docker exec --privileged no-preload-185074 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:35:11.150371  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:11.177723  282749 machine.go:94] provisionDockerMachine start ...
	I1209 02:35:11.177827  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:11.207183  282749 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:11.207545  282749 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 02:35:11.207569  282749 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:35:11.208521  282749 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52206->127.0.0.1:33063: read: connection reset by peer
	I1209 02:35:11.441133  282749 cache.go:157] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 02:35:11.441169  282749 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.46863107s
	I1209 02:35:11.441196  282749 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 02:35:11.476872  282749 cache.go:157] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 02:35:11.476903  282749 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.504411095s
	I1209 02:35:11.476923  282749 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 02:35:11.498712  282749 cache.go:157] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 02:35:11.498758  282749 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.526252276s
	I1209 02:35:11.498773  282749 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 02:35:11.503282  282749 cache.go:157] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 02:35:11.503315  282749 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.530843607s
	I1209 02:35:11.503332  282749 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 02:35:11.503350  282749 cache.go:87] Successfully saved all images to host disk.
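Each cache entry above is a container image serialized to a tarball under .minikube/cache/images. Minikube writes these with an in-process Go registry client rather than the docker CLI, so the following is only a plain-docker approximation of one entry (paths from the log):

    mkdir -p "$HOME/.minikube/cache/images/amd64/registry.k8s.io"
    docker pull registry.k8s.io/kube-proxy:v1.35.0-beta.0
    docker save registry.k8s.io/kube-proxy:v1.35.0-beta.0 \
      -o "$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0"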
	I1209 02:35:14.346390  282749 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-185074
	
	I1209 02:35:14.346418  282749 ubuntu.go:182] provisioning hostname "no-preload-185074"
	I1209 02:35:14.346474  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:14.367022  282749 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:14.367310  282749 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 02:35:14.367332  282749 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-185074 && echo "no-preload-185074" | sudo tee /etc/hostname
	I1209 02:35:14.515102  282749 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-185074
	
	I1209 02:35:14.515195  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:14.534977  282749 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:14.535218  282749 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 02:35:14.535251  282749 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-185074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-185074/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-185074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:35:14.669500  282749 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:35:14.669530  282749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:35:14.669549  282749 ubuntu.go:190] setting up certificates
	I1209 02:35:14.669560  282749 provision.go:84] configureAuth start
	I1209 02:35:14.669616  282749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-185074
	I1209 02:35:14.689128  282749 provision.go:143] copyHostCerts
	I1209 02:35:14.689191  282749 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:35:14.689200  282749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:35:14.689274  282749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:35:14.689376  282749 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:35:14.689387  282749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:35:14.689421  282749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:35:14.689493  282749 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:35:14.689503  282749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:35:14.689534  282749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:35:14.689597  282749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.no-preload-185074 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-185074]
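The server cert above is signed by the local minikube CA with SANs covering every address the apiserver may be reached on. A rough openssl equivalent (file names assumed; minikube generates these certificates in Go, not via openssl):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-185074" -out server.csr
    # sign with the CA and attach the SAN list from the log above
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:localhost,DNS:minikube,DNS:no-preload-185074")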
	I1209 02:35:12.283699  284952 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:35:12.283897  284952 start.go:159] libmachine.API.Create for "default-k8s-diff-port-512414" (driver="docker")
	I1209 02:35:12.283928  284952 client.go:173] LocalClient.Create starting
	I1209 02:35:12.283983  284952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:35:12.284013  284952 main.go:143] libmachine: Decoding PEM data...
	I1209 02:35:12.284035  284952 main.go:143] libmachine: Parsing certificate...
	I1209 02:35:12.284078  284952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:35:12.284095  284952 main.go:143] libmachine: Decoding PEM data...
	I1209 02:35:12.284106  284952 main.go:143] libmachine: Parsing certificate...
	I1209 02:35:12.284396  284952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-512414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:35:12.300604  284952 cli_runner.go:211] docker network inspect default-k8s-diff-port-512414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:35:12.300681  284952 network_create.go:284] running [docker network inspect default-k8s-diff-port-512414] to gather additional debugging logs...
	I1209 02:35:12.300703  284952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-512414
	W1209 02:35:12.316935  284952 cli_runner.go:211] docker network inspect default-k8s-diff-port-512414 returned with exit code 1
	I1209 02:35:12.316963  284952 network_create.go:287] error running [docker network inspect default-k8s-diff-port-512414]: docker network inspect default-k8s-diff-port-512414: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-512414 not found
	I1209 02:35:12.316980  284952 network_create.go:289] output of [docker network inspect default-k8s-diff-port-512414]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-512414 not found
	
	** /stderr **
	I1209 02:35:12.317070  284952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:35:12.335207  284952 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:35:12.335955  284952 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:35:12.336721  284952 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:35:12.337490  284952 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc5500}
	I1209 02:35:12.337514  284952 network_create.go:124] attempt to create docker network default-k8s-diff-port-512414 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 02:35:12.337574  284952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-512414 default-k8s-diff-port-512414
	I1209 02:35:12.389486  284952 network_create.go:108] docker network default-k8s-diff-port-512414 192.168.76.0/24 created
	I1209 02:35:12.389514  284952 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-512414" container
	I1209 02:35:12.389575  284952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:35:12.407043  284952 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-512414 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-512414 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:35:12.424368  284952 oci.go:103] Successfully created a docker volume default-k8s-diff-port-512414
	I1209 02:35:12.424460  284952 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-512414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-512414 --entrypoint /usr/bin/test -v default-k8s-diff-port-512414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:35:12.828598  284952 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-512414
	I1209 02:35:12.828723  284952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:35:12.828738  284952 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:35:12.828827  284952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-512414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:35:16.756151  284952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-512414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.927249123s)
	I1209 02:35:16.756186  284952 kic.go:203] duration metric: took 3.927443273s to extract preloaded images to volume ...
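The extraction above uses a throwaway container whose entrypoint is tar: the lz4 preload tarball is mounted read-only and the named volume is mounted as the extraction target. The generic pattern, with the tarball path pulled into a variable and the image digest omitted for brevity:

    PRELOAD_TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v default-k8s-diff-port-512414:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066 \
      -I lz4 -xf /preloaded.tar -C /extractDir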
	W1209 02:35:16.756271  284952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:35:16.756305  284952 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:35:16.756350  284952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:35:16.819390  284952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-512414 --name default-k8s-diff-port-512414 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-512414 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-512414 --network default-k8s-diff-port-512414 --ip 192.168.76.2 --volume default-k8s-diff-port-512414:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:35:14.789438  282749 provision.go:177] copyRemoteCerts
	I1209 02:35:14.789492  282749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:35:14.789525  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:14.809943  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:14.908208  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:35:14.931831  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:35:14.950840  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 02:35:14.968861  282749 provision.go:87] duration metric: took 299.282418ms to configureAuth
	I1209 02:35:14.968885  282749 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:35:14.969018  282749 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:35:14.969109  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:14.987801  282749 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:14.988059  282749 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 02:35:14.988083  282749 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:35:15.432801  282749 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:35:15.432832  282749 machine.go:97] duration metric: took 4.255082584s to provisionDockerMachine
	I1209 02:35:15.432846  282749 client.go:176] duration metric: took 5.434923024s to LocalClient.Create
	I1209 02:35:15.432871  282749 start.go:167] duration metric: took 5.434980485s to libmachine.API.Create "no-preload-185074"
	I1209 02:35:15.432917  282749 start.go:293] postStartSetup for "no-preload-185074" (driver="docker")
	I1209 02:35:15.432941  282749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:35:15.433032  282749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:35:15.433083  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:15.450736  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:15.571496  282749 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:35:15.575051  282749 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:35:15.575077  282749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:35:15.575087  282749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:35:15.575144  282749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:35:15.575226  282749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:35:15.575352  282749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:35:15.582662  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:15.714858  282749 start.go:296] duration metric: took 281.921465ms for postStartSetup
	I1209 02:35:15.717147  282749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-185074
	I1209 02:35:15.737481  282749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/config.json ...
	I1209 02:35:15.737766  282749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:15.737820  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:15.756097  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:15.845182  282749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:35:15.849421  282749 start.go:128] duration metric: took 5.853336009s to createHost
	I1209 02:35:15.849455  282749 start.go:83] releasing machines lock for "no-preload-185074", held for 5.853488716s
	I1209 02:35:15.849521  282749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-185074
	I1209 02:35:15.866143  282749 ssh_runner.go:195] Run: cat /version.json
	I1209 02:35:15.866166  282749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:35:15.866196  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:15.866228  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:15.883462  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:15.884443  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:16.028474  282749 ssh_runner.go:195] Run: systemctl --version
	I1209 02:35:16.034741  282749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:35:16.065978  282749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:35:16.070297  282749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:35:16.070360  282749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:35:16.392567  282749 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 02:35:16.392591  282749 start.go:496] detecting cgroup driver to use...
	I1209 02:35:16.392627  282749 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:35:16.392693  282749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:35:16.409058  282749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:35:16.421836  282749 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:35:16.421903  282749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:35:16.438862  282749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:35:16.457589  282749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:35:16.541265  282749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:35:16.658687  282749 docker.go:234] disabling docker service ...
	I1209 02:35:16.658766  282749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:35:16.676565  282749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:35:16.688170  282749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:35:16.811630  282749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:35:16.917615  282749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:35:16.930392  282749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:35:16.945901  282749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:35:16.945955  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:16.958263  282749 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:35:16.958321  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:16.969234  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:16.982584  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:16.991700  282749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:35:16.999835  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:17.008847  282749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:17.022494  282749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:17.033052  282749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:35:17.040598  282749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:35:17.047875  282749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:17.145262  282749 ssh_runner.go:195] Run: sudo systemctl restart crio
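Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, the conmon cgroup, and the unprivileged-port sysctl set. A sketch of the resulting drop-in (field placement follows CRI-O's config layout; the real file carries more settings):

    sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio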
	I1209 02:35:17.290960  282749 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:35:17.291039  282749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:35:17.294992  282749 start.go:564] Will wait 60s for crictl version
	I1209 02:35:17.295050  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.298586  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:35:17.328009  282749 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:35:17.328104  282749 ssh_runner.go:195] Run: crio --version
	I1209 02:35:17.365111  282749 ssh_runner.go:195] Run: crio --version
	I1209 02:35:17.412228  282749 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 02:35:12.980577  281066 cli_runner.go:164] Run: docker network inspect old-k8s-version-126117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:35:12.999795  281066 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 02:35:13.004745  281066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:35:13.015795  281066 kubeadm.go:884] updating cluster {Name:old-k8s-version-126117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-126117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:35:13.016000  281066 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 02:35:13.016102  281066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:35:13.051895  281066 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:35:13.051917  281066 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:35:13.051973  281066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:35:13.078366  281066 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:35:13.078387  281066 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:35:13.078396  281066 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1209 02:35:13.078503  281066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-126117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-126117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:35:13.078579  281066 ssh_runner.go:195] Run: crio config
	I1209 02:35:13.131136  281066 cni.go:84] Creating CNI manager for ""
	I1209 02:35:13.131164  281066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:13.131186  281066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:35:13.131214  281066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-126117 NodeName:old-k8s-version-126117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:35:13.131378  281066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-126117"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:35:13.131448  281066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1209 02:35:13.140033  281066 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:35:13.140115  281066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:35:13.148047  281066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1209 02:35:13.160794  281066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:35:13.176087  281066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1209 02:35:13.188645  281066 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:35:13.192585  281066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:35:13.202526  281066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:13.295870  281066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:13.328995  281066 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117 for IP: 192.168.85.2
	I1209 02:35:13.329028  281066 certs.go:195] generating shared ca certs ...
	I1209 02:35:13.329050  281066 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.329224  281066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:35:13.329280  281066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:35:13.329294  281066 certs.go:257] generating profile certs ...
	I1209 02:35:13.329347  281066 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.key
	I1209 02:35:13.329359  281066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.crt with IP's: []
	I1209 02:35:13.489631  281066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.crt ...
	I1209 02:35:13.489663  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.crt: {Name:mkd55449eb568c6b3a189d1fdbf6512b9629acbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.489849  281066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.key ...
	I1209 02:35:13.489865  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/client.key: {Name:mk22695b08d906fa9fc6418dff223f0f28b7c893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.489977  281066 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key.e2a63e78
	I1209 02:35:13.489994  281066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt.e2a63e78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1209 02:35:13.679461  281066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt.e2a63e78 ...
	I1209 02:35:13.679487  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt.e2a63e78: {Name:mke3fe1b2527787923b0eb87d9f105dbc6381664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.679653  281066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key.e2a63e78 ...
	I1209 02:35:13.679675  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key.e2a63e78: {Name:mkeeb14579b5c8a3443973c168ba51a87d5eaf53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.679794  281066 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt.e2a63e78 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt
	I1209 02:35:13.679872  281066 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key.e2a63e78 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key
	I1209 02:35:13.679931  281066 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.key
	I1209 02:35:13.679946  281066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.crt with IP's: []
	I1209 02:35:13.814064  281066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.crt ...
	I1209 02:35:13.814090  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.crt: {Name:mk3be7d313af882a39b2bc89553814989039bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.814252  281066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.key ...
	I1209 02:35:13.814269  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.key: {Name:mk920e6c9793a43b66d93afb00daafc91e82106f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:13.814469  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:35:13.814507  281066 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:35:13.814523  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:35:13.814553  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:35:13.814578  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:35:13.814603  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:35:13.814656  281066 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:13.815222  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:35:13.832987  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:35:13.850889  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:35:13.868097  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:35:13.884568  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 02:35:13.901789  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:35:13.919378  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:35:13.936950  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/old-k8s-version-126117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:35:13.953336  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:35:13.975536  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:35:13.992743  281066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:35:14.009767  281066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:35:14.022178  281066 ssh_runner.go:195] Run: openssl version
	I1209 02:35:14.028164  281066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:35:14.035435  281066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:35:14.042760  281066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:35:14.046279  281066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:35:14.046330  281066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:35:14.080216  281066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:35:14.087758  281066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:35:14.094919  281066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:14.101758  281066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:35:14.108756  281066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:14.112297  281066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:14.112349  281066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:14.147155  281066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:35:14.154663  281066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:35:14.161768  281066 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:35:14.169189  281066 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:35:14.176275  281066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:35:14.180076  281066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:35:14.180131  281066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:35:14.223387  281066 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:35:14.231774  281066 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
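The test/ln/x509 sequence above implements OpenSSL's hashed-directory convention: a certificate in /etc/ssl/certs is located via a symlink named after its subject-name hash with a .0 suffix. Reproducing one of the links by hand:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here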
	I1209 02:35:14.240038  281066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:35:14.243959  281066 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:35:14.244029  281066 kubeadm.go:401] StartCluster: {Name:old-k8s-version-126117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-126117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:35:14.244107  281066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:35:14.244144  281066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:35:14.271978  281066 cri.go:89] found id: ""
	I1209 02:35:14.272058  281066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:35:14.279962  281066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:35:14.287574  281066 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:35:14.287650  281066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:35:14.294991  281066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:35:14.295008  281066 kubeadm.go:158] found existing configuration files:
	
	I1209 02:35:14.295048  281066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:35:14.302359  281066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:35:14.302408  281066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:35:14.309207  281066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:35:14.316372  281066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:35:14.316418  281066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:35:14.323385  281066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:35:14.330434  281066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:35:14.330484  281066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:35:14.337460  281066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:35:14.344829  281066 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:35:14.344881  281066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
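
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so that kubeadm init can regenerate it. A minimal Go sketch of the same check, with illustrative helper names rather than minikube's real internals:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfigs keeps each kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it is removed so that
// `kubeadm init` regenerates it. The layout here is illustrative.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing, which is exactly the "will remove" case logged above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%q may not be in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
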
	I1209 02:35:14.352787  281066 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:35:14.436091  281066 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 02:35:14.509255  281066 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 02:35:17.413334  282749 cli_runner.go:164] Run: docker network inspect no-preload-185074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:35:17.440592  282749 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1209 02:35:17.445263  282749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:35:17.459057  282749 kubeadm.go:884] updating cluster {Name:no-preload-185074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-185074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:35:17.459178  282749 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:35:17.459216  282749 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:35:17.492110  282749 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1209 02:35:17.492140  282749 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 02:35:17.492190  282749 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:17.492224  282749 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.492432  282749 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1209 02:35:17.492483  282749 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.492573  282749 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.492755  282749 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.492834  282749 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.492760  282749 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.493690  282749 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.494212  282749 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1209 02:35:17.494216  282749 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.494267  282749 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.494676  282749 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.494809  282749 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.495069  282749 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:17.495137  282749 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.623391  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.635790  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.636394  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.638576  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.651141  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.656998  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.660492  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1209 02:35:17.672225  282749 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1209 02:35:17.672265  282749 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.672306  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.689804  282749 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1209 02:35:17.689860  282749 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.689911  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.692973  282749 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1209 02:35:17.693010  282749 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.693054  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.702540  282749 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1209 02:35:17.702607  282749 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.702680  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.719368  282749 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1209 02:35:17.719420  282749 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.719424  282749 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1209 02:35:17.719446  282749 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.719473  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.719473  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.719621  282749 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1209 02:35:17.719668  282749 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1209 02:35:17.719705  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:17.719729  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.719799  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.719812  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.719829  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.725874  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.725920  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.762896  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.764725  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.764813  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.764843  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 02:35:17.764886  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.764934  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.764985  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.809040  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 02:35:17.809375  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 02:35:17.813691  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 02:35:17.817828  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 02:35:17.817932  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 02:35:17.818009  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 02:35:17.818020  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 02:35:17.864848  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1209 02:35:17.864971  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 02:35:17.865068  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1209 02:35:17.865146  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1209 02:35:17.865239  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 02:35:17.871997  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1209 02:35:17.872023  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1209 02:35:17.872057  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1209 02:35:17.872083  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1209 02:35:17.872099  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 02:35:17.872104  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1209 02:35:17.872135  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 02:35:17.872156  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 02:35:17.872163  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1209 02:35:17.872176  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1209 02:35:17.872193  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1209 02:35:17.872212  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1209 02:35:17.909239  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1209 02:35:17.909281  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1209 02:35:17.909312  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1209 02:35:17.909332  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1209 02:35:17.909354  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1209 02:35:17.909379  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1209 02:35:17.909406  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1209 02:35:17.909420  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1209 02:35:17.909418  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1209 02:35:17.909434  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1209 02:35:17.952995  282749 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:18.066389  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1209 02:35:18.066438  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1209 02:35:18.085155  282749 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 02:35:18.085203  282749 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:18.085271  282749 ssh_runner.go:195] Run: which crictl
	I1209 02:35:18.132518  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:18.150672  282749 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1209 02:35:18.150761  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1209 02:35:18.198959  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:18.646096  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1209 02:35:18.646136  282749 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 02:35:18.646184  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 02:35:18.646213  282749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
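
With no preload tarball available for v1.35.0-beta.0, the run above falls back to per-image caching: probe the runtime with podman image inspect, mark missing images as "needs transfer", copy the cached tarball into /var/lib/minikube/images, and load it with podman load. A minimal sketch of that fallback, assuming a local run helper in place of minikube's ssh_runner (the real flow executes these commands inside the node over SSH and uses scp for the transfer):

package main

import (
	"fmt"
	"os/exec"
)

// run is a stand-in for minikube's ssh_runner.
func run(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

// ensureImage mirrors the fallback logged above: if the image is missing
// from the runtime, transfer the cached tarball and load it with podman.
func ensureImage(image, cachePath, nodePath string) error {
	// `podman image inspect` exits non-zero when the image is absent,
	// which is the "needs transfer" case at cache_images.go:118.
	if run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image) == nil {
		return nil
	}
	// Existence check mirrors the `stat -c "%s %y"` probes in the log.
	if run("stat", nodePath) != nil {
		if err := run("cp", cachePath, nodePath); err != nil {
			return fmt.Errorf("transfer %s: %w", image, err)
		}
	}
	// Finally load the tarball into CRI-O's store (crio.go:275).
	return run("sudo", "podman", "load", "-i", nodePath)
}

func main() {
	err := ensureImage(
		"registry.k8s.io/pause:3.10.1",
		"/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	if err != nil {
		panic(err)
	}
}
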
	I1209 02:35:17.104529  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Running}}
	I1209 02:35:17.125691  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:17.146243  284952 cli_runner.go:164] Run: docker exec default-k8s-diff-port-512414 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:35:17.217182  284952 oci.go:144] the created container "default-k8s-diff-port-512414" has a running status.
	I1209 02:35:17.217210  284952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa...
	I1209 02:35:17.308776  284952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:35:17.342085  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:17.362948  284952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:35:17.363020  284952 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-512414 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:35:17.420593  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:17.444858  284952 machine.go:94] provisionDockerMachine start ...
	I1209 02:35:17.444947  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:17.466875  284952 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:17.467208  284952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1209 02:35:17.467228  284952 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:35:17.611418  284952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512414
	
	I1209 02:35:17.611448  284952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-512414"
	I1209 02:35:17.611532  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:17.631533  284952 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:17.631820  284952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1209 02:35:17.631845  284952 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-512414 && echo "default-k8s-diff-port-512414" | sudo tee /etc/hostname
	I1209 02:35:17.807971  284952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-512414
	
	I1209 02:35:17.808079  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:17.834945  284952 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:17.835280  284952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1209 02:35:17.835316  284952 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-512414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-512414/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-512414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:35:17.989096  284952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
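
Each provisioning step above (hostname, /etc/hosts) is one command run over SSH against the container's forwarded port. A minimal sketch of that pattern with golang.org/x/crypto/ssh, using the port and key path from this run; the host key is deliberately not verified, which is only reasonable for a local kic container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33068", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname") // one command per session
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
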
	I1209 02:35:17.989122  284952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:35:17.989144  284952 ubuntu.go:190] setting up certificates
	I1209 02:35:17.989157  284952 provision.go:84] configureAuth start
	I1209 02:35:17.989216  284952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-512414
	I1209 02:35:18.011070  284952 provision.go:143] copyHostCerts
	I1209 02:35:18.011133  284952 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:35:18.011158  284952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:35:18.011234  284952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:35:18.011354  284952 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:35:18.011366  284952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:35:18.011411  284952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:35:18.011506  284952 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:35:18.011516  284952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:35:18.011560  284952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:35:18.011678  284952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-512414 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-512414 localhost minikube]
	I1209 02:35:18.151058  284952 provision.go:177] copyRemoteCerts
	I1209 02:35:18.151122  284952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:35:18.151170  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:18.173529  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:18.281628  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 02:35:18.341846  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:35:18.360612  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 02:35:18.378756  284952 provision.go:87] duration metric: took 389.582753ms to configureAuth
	I1209 02:35:18.378798  284952 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:35:18.378997  284952 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:35:18.379107  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:18.403818  284952 main.go:143] libmachine: Using SSH client type: native
	I1209 02:35:18.404182  284952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1209 02:35:18.404230  284952 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:35:18.729034  284952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:35:18.729060  284952 machine.go:97] duration metric: took 1.284183626s to provisionDockerMachine
	I1209 02:35:18.729073  284952 client.go:176] duration metric: took 6.445136608s to LocalClient.Create
	I1209 02:35:18.729105  284952 start.go:167] duration metric: took 6.445206372s to libmachine.API.Create "default-k8s-diff-port-512414"
	I1209 02:35:18.729118  284952 start.go:293] postStartSetup for "default-k8s-diff-port-512414" (driver="docker")
	I1209 02:35:18.729131  284952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:35:18.729219  284952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:35:18.729267  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:18.747943  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:18.843924  284952 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:35:18.847548  284952 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:35:18.847579  284952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:35:18.847591  284952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:35:18.847667  284952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:35:18.847762  284952 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:35:18.847883  284952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:35:18.855227  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:18.874907  284952 start.go:296] duration metric: took 145.775784ms for postStartSetup
	I1209 02:35:18.875243  284952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-512414
	I1209 02:35:18.893039  284952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/config.json ...
	I1209 02:35:18.893353  284952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:18.893415  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:18.910903  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:19.002180  284952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:35:19.008762  284952 start.go:128] duration metric: took 6.726566618s to createHost
	I1209 02:35:19.008795  284952 start.go:83] releasing machines lock for "default-k8s-diff-port-512414", held for 6.726669697s
	I1209 02:35:19.008861  284952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-512414
	I1209 02:35:19.031040  284952 ssh_runner.go:195] Run: cat /version.json
	I1209 02:35:19.031113  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:19.031191  284952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:35:19.031289  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:19.055469  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:19.056308  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:19.155975  284952 ssh_runner.go:195] Run: systemctl --version
	I1209 02:35:19.242456  284952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:35:19.290237  284952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:35:19.299888  284952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:35:19.299947  284952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:35:19.334133  284952 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 02:35:19.334205  284952 start.go:496] detecting cgroup driver to use...
	I1209 02:35:19.334249  284952 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:35:19.334318  284952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:35:19.357119  284952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:35:19.373672  284952 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:35:19.373742  284952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:35:19.398334  284952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:35:19.420718  284952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:35:19.551670  284952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:35:19.696591  284952 docker.go:234] disabling docker service ...
	I1209 02:35:19.696705  284952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:35:19.724140  284952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:35:19.740195  284952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:35:19.858607  284952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:35:19.971389  284952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:35:19.986810  284952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:35:20.008306  284952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:35:20.008371  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.022928  284952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:35:20.022994  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.034830  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.045831  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.059025  284952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:35:20.069996  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.082055  284952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.099439  284952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:35:20.109594  284952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:35:20.117771  284952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:35:20.125854  284952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:20.217142  284952 ssh_runner.go:195] Run: sudo systemctl restart crio
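
The sed chain above rewrites CRI-O's drop-in config before the restart: pin the pause image, force the systemd cgroup manager, and inject the unprivileged-port sysctl. An equivalent sketch of the first two rewrites in Go, with the path and values taken from the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// The real flow follows up with `systemctl daemon-reload` and
	// `systemctl restart crio`, as logged above.
}
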
	I1209 02:35:20.476303  284952 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:35:20.476385  284952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:35:20.481078  284952 start.go:564] Will wait 60s for crictl version
	I1209 02:35:20.481158  284952 ssh_runner.go:195] Run: which crictl
	I1209 02:35:20.485196  284952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:35:20.513748  284952 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:35:20.513832  284952 ssh_runner.go:195] Run: crio --version
	I1209 02:35:20.550260  284952 ssh_runner.go:195] Run: crio --version
	I1209 02:35:20.599113  284952 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:35:20.600546  284952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-512414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:35:20.627662  284952 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 02:35:20.633701  284952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
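
The bash one-liner above refreshes /etc/hosts in one shot: filter out any existing line for the name, append the fresh mapping, and copy the temp file back with sudo. The same upsert expressed as a small Go sketch (the function is illustrative; the address and name come from this run):

package main

import (
	"os"
	"strings"
)

// upsertHost drops any existing line for name and appends a fresh
// ip<TAB>name mapping, like the grep -v / echo / sudo cp pipeline above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
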
	I1209 02:35:20.649875  284952 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:35:20.650014  284952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:35:20.650063  284952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:35:20.707916  284952 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:35:20.708028  284952 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:35:20.708122  284952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:35:20.746689  284952 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:35:20.746715  284952 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:35:20.746724  284952 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1209 02:35:20.746836  284952 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-512414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:35:20.746932  284952 ssh_runner.go:195] Run: crio config
	I1209 02:35:20.822122  284952 cni.go:84] Creating CNI manager for ""
	I1209 02:35:20.822165  284952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:20.822205  284952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:35:20.822251  284952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512414 NodeName:default-k8s-diff-port-512414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:35:20.822573  284952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512414"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:35:20.822676  284952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:35:20.837419  284952 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:35:20.837489  284952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:35:20.846556  284952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1209 02:35:20.860839  284952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:35:20.885105  284952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
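
The 2224-byte payload staged above is the multi-document kubeadm config rendered earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---, all consumed by kubeadm from the single --config file. A small sketch that walks those documents with gopkg.in/yaml.v3 and prints each kind (illustrative tooling, not part of minikube):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Decode one YAML document per iteration until EOF.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
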
	I1209 02:35:20.899292  284952 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:35:20.903474  284952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:35:20.915139  284952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:21.029048  284952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:21.062042  284952 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414 for IP: 192.168.76.2
	I1209 02:35:21.062069  284952 certs.go:195] generating shared ca certs ...
	I1209 02:35:21.062091  284952 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.062264  284952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:35:21.062324  284952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:35:21.062338  284952 certs.go:257] generating profile certs ...
	I1209 02:35:21.062409  284952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.key
	I1209 02:35:21.062429  284952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.crt with IP's: []
	I1209 02:35:21.170456  284952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.crt ...
	I1209 02:35:21.170482  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.crt: {Name:mk706b3d53e0601eed59656bd21a61e8e141e07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.170624  284952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.key ...
	I1209 02:35:21.170652  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.key: {Name:mk96c0b4f08381ff9965583355a31e81821282b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.170749  284952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key.907630c7
	I1209 02:35:21.170770  284952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt.907630c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 02:35:21.268285  284952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt.907630c7 ...
	I1209 02:35:21.268315  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt.907630c7: {Name:mk82a097d328aef0b590bec9b6ae7c14a33c6dc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.268467  284952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key.907630c7 ...
	I1209 02:35:21.268489  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key.907630c7: {Name:mk235863d0ea68591a0022040ded17fb08625535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.268612  284952 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt.907630c7 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt
	I1209 02:35:21.268749  284952 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key.907630c7 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key
	I1209 02:35:21.268830  284952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key
	I1209 02:35:21.268860  284952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.crt with IP's: []
	I1209 02:35:21.441052  284952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.crt ...
	I1209 02:35:21.441086  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.crt: {Name:mk6b39cadc6145f5fffcd5ba32eceddb35c9599c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.441267  284952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key ...
	I1209 02:35:21.441288  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key: {Name:mk41289c58d106bcf9a4cb669094cf4972526240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:21.441550  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:35:21.441603  284952 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:35:21.441624  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:35:21.441678  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:35:21.441717  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:35:21.441750  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:35:21.441809  284952 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:21.442479  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:35:21.461068  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:35:21.477977  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:35:21.495301  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:35:21.512899  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 02:35:21.529618  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:35:21.547015  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:35:21.563900  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:35:21.580955  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:35:21.600906  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:35:21.618076  284952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:35:21.634520  284952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:35:21.646587  284952 ssh_runner.go:195] Run: openssl version
	I1209 02:35:21.652356  284952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:21.659266  284952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:35:21.666320  284952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:21.669705  284952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:21.669736  284952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:21.703979  284952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:35:21.711040  284952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:35:21.717936  284952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:35:21.724714  284952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:35:21.731743  284952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:35:21.735282  284952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:35:21.735329  284952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:35:21.769493  284952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:35:21.776500  284952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
	I1209 02:35:21.783349  284952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:35:21.790365  284952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:35:21.797250  284952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:35:21.800616  284952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:35:21.800679  284952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:35:21.834367  284952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:35:21.841306  284952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
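
The symlink sequence above follows OpenSSL's CApath convention: each trusted certificate in /etc/ssl/certs must be reachable through a link named <subject-hash>.0. A minimal sketch of the same steps by hand, using only paths and values taken from this log:

    # print the subject hash OpenSSL uses as the link name (here: b5213941)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # ensure the hash-named link points at the installed CA cert
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
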
	I1209 02:35:21.848169  284952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:35:21.851334  284952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:35:21.851395  284952 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:35:21.851457  284952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:35:21.851489  284952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:35:21.876928  284952 cri.go:89] found id: ""
	I1209 02:35:21.876980  284952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:35:21.884108  284952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:35:21.891331  284952 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:35:21.891380  284952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:35:21.898937  284952 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:35:21.898952  284952 kubeadm.go:158] found existing configuration files:
	
	I1209 02:35:21.898990  284952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 02:35:21.906187  284952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:35:21.906230  284952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:35:21.913163  284952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 02:35:21.920459  284952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:35:21.920511  284952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:35:21.928558  284952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 02:35:21.936106  284952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:35:21.936155  284952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:35:21.943523  284952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 02:35:21.951802  284952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:35:21.951853  284952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
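
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected API endpoint (port 8444 for this default-k8s-diff-port profile) is deleted so kubeadm regenerates it. Roughly the same logic, sketched as a loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
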
	I1209 02:35:21.959484  284952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:35:22.018220  284952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
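
The SystemVerification warning is expected under the docker driver: kubeadm runs modprobe configs to expose the kernel config and this GCP kernel ships without that module, so the check fails and is skipped via the --ignore-preflight-errors=SystemVerification flag in the init command above. A sketch (not from the log) of checking the paths the verifier typically probes:

    # kernel config sources kubeadm's verifier looks for, among others
    ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null
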
	I1209 02:35:24.462236  281066 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1209 02:35:24.462314  281066 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:35:24.462462  281066 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:35:24.462585  281066 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:35:24.462675  281066 kubeadm.go:319] OS: Linux
	I1209 02:35:24.462755  281066 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:35:24.462829  281066 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:35:24.462908  281066 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:35:24.462994  281066 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:35:24.463051  281066 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:35:24.463120  281066 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:35:24.463189  281066 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:35:24.463249  281066 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:35:24.463355  281066 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:35:24.463502  281066 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:35:24.463662  281066 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 02:35:24.463764  281066 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:35:24.464828  281066 out.go:252]   - Generating certificates and keys ...
	I1209 02:35:24.464935  281066 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:35:24.465026  281066 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:35:24.465111  281066 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:35:24.465198  281066 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:35:24.465273  281066 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:35:24.465334  281066 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:35:24.465397  281066 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:35:24.465566  281066 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-126117] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 02:35:24.465685  281066 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:35:24.465908  281066 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-126117] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 02:35:24.466020  281066 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:35:24.466131  281066 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:35:24.466204  281066 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:35:24.466288  281066 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:35:24.466359  281066 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:35:24.466429  281066 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:35:24.466517  281066 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:35:24.466620  281066 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:35:24.466751  281066 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:35:24.466814  281066 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:35:24.468019  281066 out.go:252]   - Booting up control plane ...
	I1209 02:35:24.468141  281066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:35:24.468231  281066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:35:24.468318  281066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:35:24.468481  281066 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:35:24.468613  281066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:35:24.468683  281066 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:35:24.468922  281066 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 02:35:24.469039  281066 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002207 seconds
	I1209 02:35:24.469208  281066 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:35:24.469384  281066 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:35:24.469466  281066 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:35:24.469770  281066 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-126117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:35:24.469863  281066 kubeadm.go:319] [bootstrap-token] Using token: w4rn8p.k9ajoe5n83sedwba
	I1209 02:35:24.470875  281066 out.go:252]   - Configuring RBAC rules ...
	I1209 02:35:24.471007  281066 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:35:24.471110  281066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:35:24.471299  281066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:35:24.471449  281066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:35:24.471590  281066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:35:24.471722  281066 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:35:24.471892  281066 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:35:24.471949  281066 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:35:24.472005  281066 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:35:24.472014  281066 kubeadm.go:319] 
	I1209 02:35:24.472086  281066 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:35:24.472101  281066 kubeadm.go:319] 
	I1209 02:35:24.472194  281066 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:35:24.472201  281066 kubeadm.go:319] 
	I1209 02:35:24.472232  281066 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:35:24.472309  281066 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:35:24.472381  281066 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:35:24.472389  281066 kubeadm.go:319] 
	I1209 02:35:24.472461  281066 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:35:24.472471  281066 kubeadm.go:319] 
	I1209 02:35:24.472537  281066 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:35:24.472546  281066 kubeadm.go:319] 
	I1209 02:35:24.472616  281066 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:35:24.472738  281066 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:35:24.472873  281066 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:35:24.472883  281066 kubeadm.go:319] 
	I1209 02:35:24.473017  281066 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:35:24.473131  281066 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:35:24.473140  281066 kubeadm.go:319] 
	I1209 02:35:24.473240  281066 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w4rn8p.k9ajoe5n83sedwba \
	I1209 02:35:24.473358  281066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:35:24.473382  281066 kubeadm.go:319] 	--control-plane 
	I1209 02:35:24.473387  281066 kubeadm.go:319] 
	I1209 02:35:24.473488  281066 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:35:24.473513  281066 kubeadm.go:319] 
	I1209 02:35:24.473674  281066 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w4rn8p.k9ajoe5n83sedwba \
	I1209 02:35:24.473825  281066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
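
The --discovery-token-ca-cert-hash in the join commands pins the cluster CA for joining nodes: it is the SHA-256 digest of the CA certificate's DER-encoded public key. A sketch of recomputing it from the certificateDir this run uses:

    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256
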
	I1209 02:35:24.473849  281066 cni.go:84] Creating CNI manager for ""
	I1209 02:35:24.473857  281066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:24.475687  281066 out.go:179] * Configuring CNI (Container Networking Interface) ...
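
With the docker driver and the crio runtime, minikube recommends kindnet as the CNI, writes its manifest to /var/tmp/minikube/cni.yaml, and applies it with the cluster's own kubectl (visible a few lines below). A quick follow-up health check might look like this (DaemonSet name assumed, not shown in this log):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet
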
	I1209 02:35:19.981350  282749 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.335051633s)
	I1209 02:35:19.981396  282749 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 02:35:19.981457  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.335249043s)
	I1209 02:35:19.981484  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 02:35:19.981484  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1209 02:35:19.981511  282749 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1209 02:35:19.981551  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1209 02:35:19.986138  282749 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 02:35:19.986168  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1209 02:35:21.555090  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.573519823s)
	I1209 02:35:21.555114  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1209 02:35:21.555135  282749 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 02:35:21.555173  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 02:35:22.711926  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.15672806s)
	I1209 02:35:22.711959  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1209 02:35:22.711985  282749 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1209 02:35:22.712039  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1209 02:35:23.908428  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.196360036s)
	I1209 02:35:23.908457  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1209 02:35:23.908482  282749 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 02:35:23.908527  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 02:35:22.085298  284952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 02:35:24.477449  281066 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 02:35:24.482267  281066 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1209 02:35:24.482285  281066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 02:35:24.495014  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 02:35:25.211886  281066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:35:25.211959  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:25.211986  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-126117 minikube.k8s.io/updated_at=2025_12_09T02_35_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=old-k8s-version-126117 minikube.k8s.io/primary=true
	I1209 02:35:25.305391  281066 ops.go:34] apiserver oom_adj: -16
	I1209 02:35:25.305499  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:25.806423  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:26.305766  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:26.805798  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:27.305734  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:27.805625  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
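
The repeated kubectl get sa default calls above are a readiness poll: minikube retries at roughly 500 ms intervals until the controller-manager has created the namespace's default ServiceAccount, which gates anything that mounts its token. The same wait, sketched as a loop:

    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
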
	I1209 02:35:25.214851  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.306295202s)
	I1209 02:35:25.214899  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1209 02:35:25.214933  282749 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 02:35:25.215009  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 02:35:26.672097  282749 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.457042228s)
	I1209 02:35:26.672123  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1209 02:35:26.672146  282749 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 02:35:26.672197  282749 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 02:35:27.237866  282749 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 02:35:27.237906  282749 cache_images.go:125] Successfully loaded all cached images
	I1209 02:35:27.237912  282749 cache_images.go:94] duration metric: took 9.74575927s to LoadCachedImages
	I1209 02:35:27.237926  282749 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1209 02:35:27.238032  282749 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-185074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-185074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
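
The empty ExecStart= line in the generated unit is the standard systemd override idiom: it clears the ExecStart inherited from the base kubelet.service before defining the replacement, and minikube installs it as the 10-kubeadm.conf drop-in written below. To inspect the merged result on the node (a sketch):

    systemctl cat kubelet      # shows the base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
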
	I1209 02:35:27.238114  282749 ssh_runner.go:195] Run: crio config
	I1209 02:35:27.285629  282749 cni.go:84] Creating CNI manager for ""
	I1209 02:35:27.285667  282749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:27.285688  282749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:35:27.285710  282749 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-185074 NodeName:no-preload-185074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:35:27.285840  282749 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-185074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
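
The rendered config above stacks four documents: InitConfiguration (node and bootstrap-token settings), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. Note the CI-friendly kubelet knobs: evictionHard zeroed out and imageGCHighThresholdPercent: 100 effectively disable disk-pressure eviction inside the ephemeral node. A sketch for vetting such a file without mutating the host:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run
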
	
	I1209 02:35:27.285906  282749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 02:35:27.294036  282749 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1209 02:35:27.294079  282749 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 02:35:27.301781  282749 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1209 02:35:27.301787  282749 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1209 02:35:27.301877  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1209 02:35:27.301781  282749 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1209 02:35:27.305598  282749 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1209 02:35:27.305625  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1209 02:35:28.300591  282749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:28.315137  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1209 02:35:28.319289  282749 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1209 02:35:28.319321  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1209 02:35:28.565514  282749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1209 02:35:28.569819  282749 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1209 02:35:28.569860  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
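
Each binary URL above carries a checksum=file:...sha256 query, so the download is verified against the published digest before being copied into /var/lib/minikube/binaries. A manual equivalent of that verification (a sketch; the dl.k8s.io .sha256 files contain the bare hex digest):

    curl -fsSLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256)  kubeadm" \
      | sha256sum -c -
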
	I1209 02:35:28.799999  282749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:35:28.807925  282749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1209 02:35:28.820936  282749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 02:35:29.104490  282749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1209 02:35:29.117614  282749 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:35:29.121734  282749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
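
The one-liner above rewrites the control-plane host entry idempotently: it filters any existing control-plane.minikube.internal line out of /etc/hosts, appends the current node IP, and sudo-copies the temp file back so that only the copy, not the shell redirection, needs root. The same pattern, unrolled for readability:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.103.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
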
	I1209 02:35:29.229062  282749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:29.307704  282749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:29.328254  282749 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074 for IP: 192.168.103.2
	I1209 02:35:29.328279  282749 certs.go:195] generating shared ca certs ...
	I1209 02:35:29.328298  282749 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.328471  282749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:35:29.328524  282749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:35:29.328537  282749 certs.go:257] generating profile certs ...
	I1209 02:35:29.328608  282749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.key
	I1209 02:35:29.328625  282749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.crt with IP's: []
	I1209 02:35:29.501269  282749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.crt ...
	I1209 02:35:29.501295  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.crt: {Name:mk33443e244c3924f67886e7d573d2f3539a1043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.501476  282749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.key ...
	I1209 02:35:29.501494  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/client.key: {Name:mk45f49a38a1462d8f3eb3c8c58d656292163fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.501604  282749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key.65c20ace
	I1209 02:35:29.501628  282749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt.65c20ace with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1209 02:35:29.649418  282749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt.65c20ace ...
	I1209 02:35:29.649446  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt.65c20ace: {Name:mk5978d469b5fb1eab5cf08d20bba38c679ecd8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.649607  282749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key.65c20ace ...
	I1209 02:35:29.649628  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key.65c20ace: {Name:mk10f746b4f4768f9d9086e477050b91bfd48ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.649752  282749 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt.65c20ace -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt
	I1209 02:35:29.649848  282749 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key.65c20ace -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key
	I1209 02:35:29.649935  282749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.key
	I1209 02:35:29.649954  282749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.crt with IP's: []
	I1209 02:35:29.771825  282749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.crt ...
	I1209 02:35:29.771850  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.crt: {Name:mk66bf1daf50cf253f3c298d6c683d2ccc043e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.793780  282749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.key ...
	I1209 02:35:29.793826  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.key: {Name:mka3d9c23ac5116977a9bf224208a2df01f7cb68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:29.794101  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:35:29.794159  282749 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:35:29.794174  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:35:29.794208  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:35:29.794240  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:35:29.794271  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:35:29.794324  282749 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:35:29.795132  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:35:29.813517  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:35:29.831247  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:35:29.852823  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:35:29.875055  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 02:35:29.896988  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 02:35:29.915759  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:35:29.938727  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:35:29.961917  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:35:29.984486  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:35:30.003281  282749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:35:30.020256  282749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:35:30.032456  282749 ssh_runner.go:195] Run: openssl version
	I1209 02:35:30.038654  282749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:35:30.047917  282749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:35:30.058189  282749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:35:30.062331  282749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:35:30.062387  282749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:35:30.100974  282749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:35:30.108688  282749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
	I1209 02:35:30.116249  282749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:35:30.123518  282749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:35:30.130755  282749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:35:30.134629  282749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:35:30.134700  282749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:35:30.170485  282749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:35:30.179396  282749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:35:30.188198  282749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:30.195963  282749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:35:30.204074  282749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:30.208032  282749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:30.208087  282749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:35:30.245951  282749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:35:30.253209  282749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:35:30.260897  282749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:35:30.264547  282749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:35:30.264601  282749 kubeadm.go:401] StartCluster: {Name:no-preload-185074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-185074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:35:30.264694  282749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:35:30.264737  282749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:35:30.291119  282749 cri.go:89] found id: ""
	I1209 02:35:30.291179  282749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:35:30.298663  282749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:35:30.306425  282749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:35:30.306472  282749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:35:30.314219  282749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:35:30.314238  282749 kubeadm.go:158] found existing configuration files:
	
	I1209 02:35:30.314278  282749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:35:30.322477  282749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:35:30.322570  282749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:35:30.330343  282749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:35:30.338201  282749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:35:30.338247  282749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:35:30.345626  282749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:35:30.353510  282749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:35:30.353551  282749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:35:30.363208  282749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:35:30.371733  282749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:35:30.371779  282749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
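
The four grep-then-rm exchanges above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf file is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it. A minimal shell sketch of the same logic (the endpoint string and file names are taken from the log; the loop itself is illustrative):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop the file when it is missing or points at a different endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
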
	I1209 02:35:30.379900  282749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:35:30.417224  282749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 02:35:30.417299  282749 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:35:30.485882  282749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:35:30.485986  282749 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:35:30.486044  282749 kubeadm.go:319] OS: Linux
	I1209 02:35:30.486111  282749 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:35:30.486199  282749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:35:30.486283  282749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:35:30.486345  282749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:35:30.486409  282749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:35:30.486476  282749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:35:30.486541  282749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:35:30.486599  282749 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:35:30.552749  282749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:35:30.552887  282749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:35:30.553010  282749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
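
As the preflight output notes, the image pull can be performed ahead of time to shave startup latency. A sketch using the kubeadm binary this run already has on the node (binary path and version are taken from the init command in the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config images pull \
      --kubernetes-version v1.35.0-beta.0
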
	I1209 02:35:30.569126  282749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:35:30.571388  282749 out.go:252]   - Generating certificates and keys ...
	I1209 02:35:30.571496  282749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:35:30.571600  282749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:35:30.870985  282749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:35:30.946970  282749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:35:31.102887  282749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:35:31.288305  282749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:35:31.363615  282749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:35:31.363879  282749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-185074] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1209 02:35:31.438421  282749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:35:31.438572  282749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-185074] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1209 02:35:31.595976  282749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:35:31.650450  282749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:35:31.759955  282749 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:35:31.760281  282749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:35:31.897816  282749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:35:31.988952  282749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:35:32.067627  282749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:35:32.284878  282749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:35:32.334163  282749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:35:32.334724  282749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:35:32.338457  282749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:35:28.306124  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:28.806346  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:29.306201  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:29.805586  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:30.306413  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:30.805688  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:31.305737  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:31.806315  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:32.306341  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:32.805855  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:34.559555  284952 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:35:34.559683  284952 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:35:34.559804  284952 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:35:34.559899  284952 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:35:34.559947  284952 kubeadm.go:319] OS: Linux
	I1209 02:35:34.560022  284952 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:35:34.560091  284952 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:35:34.560176  284952 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:35:34.560265  284952 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:35:34.560343  284952 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:35:34.560408  284952 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:35:34.560474  284952 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:35:34.560557  284952 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:35:34.560688  284952 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:35:34.560850  284952 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:35:34.561003  284952 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:35:34.561093  284952 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:35:34.563373  284952 out.go:252]   - Generating certificates and keys ...
	I1209 02:35:34.563436  284952 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:35:34.563493  284952 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:35:34.563552  284952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:35:34.563598  284952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:35:34.563665  284952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:35:34.563714  284952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:35:34.563762  284952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:35:34.563866  284952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-512414 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 02:35:34.563909  284952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:35:34.564029  284952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-512414 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 02:35:34.564101  284952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:35:34.564191  284952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:35:34.564255  284952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:35:34.564312  284952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:35:34.564356  284952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:35:34.564401  284952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:35:34.564443  284952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:35:34.564498  284952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:35:34.564557  284952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:35:34.564676  284952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:35:34.564804  284952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:35:34.565972  284952 out.go:252]   - Booting up control plane ...
	I1209 02:35:34.566102  284952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:35:34.566224  284952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:35:34.566317  284952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:35:34.566548  284952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:35:34.566674  284952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:35:34.566807  284952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:35:34.566936  284952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:35:34.566993  284952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:35:34.567174  284952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:35:34.567345  284952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:35:34.567440  284952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001775384s
	I1209 02:35:34.567564  284952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:35:34.567700  284952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1209 02:35:34.567837  284952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:35:34.567949  284952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:35:34.568065  284952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.110102417s
	I1209 02:35:34.568169  284952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.448926671s
	I1209 02:35:34.568266  284952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00129866s
	I1209 02:35:34.568426  284952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:35:34.568624  284952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:35:34.568726  284952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:35:34.568976  284952 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-512414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:35:34.569050  284952 kubeadm.go:319] [bootstrap-token] Using token: 6zvlni.0ik3i8jvj5ra5fdh
	I1209 02:35:34.570427  284952 out.go:252]   - Configuring RBAC rules ...
	I1209 02:35:34.570562  284952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:35:34.570715  284952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:35:34.570947  284952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:35:34.571134  284952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:35:34.571314  284952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:35:34.571455  284952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:35:34.571614  284952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:35:34.571702  284952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:35:34.571777  284952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:35:34.571792  284952 kubeadm.go:319] 
	I1209 02:35:34.571894  284952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:35:34.571904  284952 kubeadm.go:319] 
	I1209 02:35:34.572017  284952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:35:34.572031  284952 kubeadm.go:319] 
	I1209 02:35:34.572072  284952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:35:34.572157  284952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:35:34.572229  284952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:35:34.572237  284952 kubeadm.go:319] 
	I1209 02:35:34.572298  284952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:35:34.572309  284952 kubeadm.go:319] 
	I1209 02:35:34.572365  284952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:35:34.572371  284952 kubeadm.go:319] 
	I1209 02:35:34.572439  284952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:35:34.572542  284952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:35:34.572601  284952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:35:34.572604  284952 kubeadm.go:319] 
	I1209 02:35:34.572737  284952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:35:34.572867  284952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:35:34.572880  284952 kubeadm.go:319] 
	I1209 02:35:34.572996  284952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 6zvlni.0ik3i8jvj5ra5fdh \
	I1209 02:35:34.573123  284952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:35:34.573150  284952 kubeadm.go:319] 	--control-plane 
	I1209 02:35:34.573155  284952 kubeadm.go:319] 
	I1209 02:35:34.573293  284952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:35:34.573309  284952 kubeadm.go:319] 
	I1209 02:35:34.573447  284952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 6zvlni.0ik3i8jvj5ra5fdh \
	I1209 02:35:34.573621  284952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
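
The --discovery-token-ca-cert-hash printed in the join commands above can be re-derived from the cluster CA as a sanity check before joining a node. A sketch using the standard openssl pipeline from the kubeadm documentation; the CA path is the certificateDir this run uses (/var/lib/minikube/certs), which is an assumption worth verifying on the node:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf

Note that both clusters in this run print the same hash: minikube reuses one CA across profiles under the same .minikube directory.
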
	I1209 02:35:34.573690  284952 cni.go:84] Creating CNI manager for ""
	I1209 02:35:34.573703  284952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:34.576071  284952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 02:35:32.339831  282749 out.go:252]   - Booting up control plane ...
	I1209 02:35:32.339969  282749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:35:32.340097  282749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:35:32.340592  282749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:35:32.372050  282749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:35:32.372216  282749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:35:32.380966  282749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:35:32.381291  282749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:35:32.381404  282749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:35:32.482558  282749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:35:32.482711  282749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:35:33.483982  282749 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464846s
	I1209 02:35:33.488424  282749 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:35:33.488582  282749 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1209 02:35:33.488749  282749 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:35:33.488878  282749 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:35:33.993112  282749 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.739246ms
	I1209 02:35:35.422967  282749 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.934546822s
	I1209 02:35:36.990109  282749 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501572982s
	I1209 02:35:37.008840  282749 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:35:37.017695  282749 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:35:37.027967  282749 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:35:37.028870  282749 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-185074 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:35:37.036951  282749 kubeadm.go:319] [bootstrap-token] Using token: vht8dz.hvm1wy8cty9p32wz
	I1209 02:35:34.577346  284952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 02:35:34.582583  284952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 02:35:34.582601  284952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 02:35:34.596790  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
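
After the cni.yaml apply above, the kindnet pods must come up before the node can report Ready. A quick manual check, assuming the manifest's DaemonSet is named kindnet (consistent with the kindnet-5hz5b pod seen later in the kube-system pod list):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=2m
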
	I1209 02:35:34.831077  284952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:35:34.831331  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:34.831436  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-512414 minikube.k8s.io/updated_at=2025_12_09T02_35_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=default-k8s-diff-port-512414 minikube.k8s.io/primary=true
	I1209 02:35:34.842602  284952 ops.go:34] apiserver oom_adj: -16
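
The "apiserver oom_adj: -16" read above confirms the API server runs with a strongly negative OOM score, so the kernel will prefer to kill almost any other process first under memory pressure. The same check by hand, using the legacy /proc interface the log reads plus its modern counterpart:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale: -17..15
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale: -1000..1000
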
	I1209 02:35:34.932766  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:35.433132  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:35.933441  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:36.432956  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:36.932914  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:33.305549  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:33.805974  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:34.305962  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:34.806441  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:35.305927  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:35.806056  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:36.305729  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:36.805631  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:37.305768  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:37.806583  281066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:37.878756  281066 kubeadm.go:1114] duration metric: took 12.666854076s to wait for elevateKubeSystemPrivileges
	I1209 02:35:37.878794  281066 kubeadm.go:403] duration metric: took 23.634769402s to StartCluster
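
The burst of "kubectl get sa default" calls above, one roughly every 500ms, is the elevateKubeSystemPrivileges wait: the command fails until kube-controller-manager creates the default ServiceAccount, and the 12.67s duration is how long that took on this run. A minimal sketch of the same polling loop:

    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount exists
    done
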
	I1209 02:35:37.878816  281066 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:37.878897  281066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:35:37.879966  281066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:37.880267  281066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 02:35:37.880273  281066 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:35:37.880450  281066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:35:37.880678  281066 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-126117"
	I1209 02:35:37.880704  281066 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-126117"
	I1209 02:35:37.880536  281066 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:35:37.880736  281066 host.go:66] Checking if "old-k8s-version-126117" exists ...
	I1209 02:35:37.880765  281066 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-126117"
	I1209 02:35:37.880788  281066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-126117"
	I1209 02:35:37.881202  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:37.881374  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:37.883077  281066 out.go:179] * Verifying Kubernetes components...
	I1209 02:35:37.884805  281066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:37.910504  281066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:37.038371  282749 out.go:252]   - Configuring RBAC rules ...
	I1209 02:35:37.038527  282749 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:35:37.041193  282749 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:35:37.046650  282749 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:35:37.048922  282749 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:35:37.051170  282749 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:35:37.053490  282749 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:35:37.396043  282749 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:35:37.812241  282749 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:35:38.397202  282749 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:35:38.398236  282749 kubeadm.go:319] 
	I1209 02:35:38.398323  282749 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:35:38.398347  282749 kubeadm.go:319] 
	I1209 02:35:38.398507  282749 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:35:38.398526  282749 kubeadm.go:319] 
	I1209 02:35:38.398561  282749 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:35:38.398657  282749 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:35:38.398735  282749 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:35:38.398743  282749 kubeadm.go:319] 
	I1209 02:35:38.398841  282749 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:35:38.398852  282749 kubeadm.go:319] 
	I1209 02:35:38.398959  282749 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:35:38.398971  282749 kubeadm.go:319] 
	I1209 02:35:38.399045  282749 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:35:38.399154  282749 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:35:38.399242  282749 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:35:38.399269  282749 kubeadm.go:319] 
	I1209 02:35:38.399409  282749 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:35:38.399533  282749 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:35:38.399545  282749 kubeadm.go:319] 
	I1209 02:35:38.399660  282749 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vht8dz.hvm1wy8cty9p32wz \
	I1209 02:35:38.399805  282749 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:35:38.399829  282749 kubeadm.go:319] 	--control-plane 
	I1209 02:35:38.399834  282749 kubeadm.go:319] 
	I1209 02:35:38.399923  282749 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:35:38.399928  282749 kubeadm.go:319] 
	I1209 02:35:38.400020  282749 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vht8dz.hvm1wy8cty9p32wz \
	I1209 02:35:38.400136  282749 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
	I1209 02:35:38.402719  282749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 02:35:38.402884  282749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
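
Both warnings above are benign in this environment: the GCP kernel ships without the "configs" module, and minikube manages the kubelet lifecycle itself rather than relying on systemd enablement. Outside the harness, the second warning is cleared with exactly the command kubeadm suggests:

    sudo systemctl enable kubelet.service
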
	I1209 02:35:38.402915  282749 cni.go:84] Creating CNI manager for ""
	I1209 02:35:38.402925  282749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:35:38.405104  282749 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 02:35:37.911805  281066 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-126117"
	I1209 02:35:37.911859  281066 host.go:66] Checking if "old-k8s-version-126117" exists ...
	I1209 02:35:37.911812  281066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:37.912039  281066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:35:37.912116  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:37.912343  281066 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:35:37.940814  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:37.948903  281066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:37.948995  281066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:35:37.949087  281066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:35:37.972911  281066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:35:37.991145  281066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 02:35:38.045668  281066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:38.058073  281066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:38.090583  281066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:38.242983  281066 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
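
The sed pipeline at 02:35:37.991 above rewrites the coredns ConfigMap in place, and the "host record injected" line confirms the edit landed. Rendering the Corefile back out should show the inserted block; a sketch, assuming the ConfigMap's data key is Corefile (the CoreDNS default):

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain the injected stanza:
    #   hosts {
    #      192.168.85.1 host.minikube.internal
    #      fallthrough
    #   }
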
	I1209 02:35:38.243894  281066 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-126117" to be "Ready" ...
	I1209 02:35:38.454990  281066 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 02:35:38.406148  282749 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 02:35:38.411524  282749 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1209 02:35:38.411541  282749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 02:35:38.428146  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 02:35:38.665823  282749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:35:38.665921  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:38.665985  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-185074 minikube.k8s.io/updated_at=2025_12_09T02_35_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=no-preload-185074 minikube.k8s.io/primary=true
	I1209 02:35:38.676387  282749 ops.go:34] apiserver oom_adj: -16
	I1209 02:35:38.751040  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:39.251455  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:37.433127  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:37.933856  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:38.433725  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:38.933850  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:39.433504  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:39.933877  284952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:40.003500  284952 kubeadm.go:1114] duration metric: took 5.172307475s to wait for elevateKubeSystemPrivileges
	I1209 02:35:40.003533  284952 kubeadm.go:403] duration metric: took 18.152142944s to StartCluster
	I1209 02:35:40.003553  284952 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:40.003624  284952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:35:40.005147  284952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:40.005361  284952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 02:35:40.005360  284952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:35:40.005444  284952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:35:40.005523  284952 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-512414"
	I1209 02:35:40.005545  284952 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-512414"
	I1209 02:35:40.005558  284952 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-512414"
	I1209 02:35:40.005575  284952 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:35:40.005580  284952 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:35:40.005591  284952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512414"
	I1209 02:35:40.005983  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:40.006133  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:40.006854  284952 out.go:179] * Verifying Kubernetes components...
	I1209 02:35:40.008298  284952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:40.030745  284952 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-512414"
	I1209 02:35:40.030791  284952 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:35:40.031269  284952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:35:40.031900  284952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:40.033963  284952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:40.033986  284952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:35:40.034035  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:40.057860  284952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:40.057929  284952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:35:40.058258  284952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:35:40.062547  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:40.085554  284952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:35:40.100348  284952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 02:35:40.169430  284952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:40.177082  284952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:40.195907  284952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:40.282719  284952 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1209 02:35:40.284351  284952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-512414" to be "Ready" ...
	I1209 02:35:40.500434  284952 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 02:35:40.501536  284952 addons.go:530] duration metric: took 496.094233ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:35:40.787095  284952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-512414" context rescaled to 1 replicas
	I1209 02:35:38.456194  281066 addons.go:530] duration metric: took 575.73962ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:35:38.747576  281066 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-126117" context rescaled to 1 replicas
	W1209 02:35:40.247292  281066 node_ready.go:57] node "old-k8s-version-126117" has "Ready":"False" status (will retry)
	W1209 02:35:42.746963  281066 node_ready.go:57] node "old-k8s-version-126117" has "Ready":"False" status (will retry)
	I1209 02:35:39.751568  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:40.251089  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:40.751266  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:41.251397  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:41.751978  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:42.251790  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:42.751833  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:43.251365  282749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:35:43.429063  282749 kubeadm.go:1114] duration metric: took 4.763235735s to wait for elevateKubeSystemPrivileges
	I1209 02:35:43.429107  282749 kubeadm.go:403] duration metric: took 13.164510119s to StartCluster
	I1209 02:35:43.429128  282749 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:43.429207  282749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:35:43.431306  282749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:35:43.431582  282749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 02:35:43.431573  282749 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:35:43.431604  282749 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:35:43.431729  282749 addons.go:70] Setting storage-provisioner=true in profile "no-preload-185074"
	I1209 02:35:43.431747  282749 addons.go:239] Setting addon storage-provisioner=true in "no-preload-185074"
	I1209 02:35:43.431775  282749 host.go:66] Checking if "no-preload-185074" exists ...
	I1209 02:35:43.431789  282749 addons.go:70] Setting default-storageclass=true in profile "no-preload-185074"
	I1209 02:35:43.431808  282749 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-185074"
	I1209 02:35:43.431851  282749 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:35:43.432153  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:43.432286  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:43.432888  282749 out.go:179] * Verifying Kubernetes components...
	I1209 02:35:43.434501  282749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:35:43.459649  282749 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:35:43.459658  282749 addons.go:239] Setting addon default-storageclass=true in "no-preload-185074"
	I1209 02:35:43.459818  282749 host.go:66] Checking if "no-preload-185074" exists ...
	I1209 02:35:43.460273  282749 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:35:43.461188  282749 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:43.461206  282749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:35:43.461266  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:43.495948  282749 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:43.495971  282749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:35:43.496060  282749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:35:43.497630  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:43.521057  282749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:35:43.537146  282749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 02:35:43.585204  282749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:35:43.609954  282749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:35:43.628597  282749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:35:43.709231  282749 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1209 02:35:43.710822  282749 node_ready.go:35] waiting up to 6m0s for node "no-preload-185074" to be "Ready" ...
	I1209 02:35:43.920732  282749 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 02:35:43.921873  282749 addons.go:530] duration metric: took 490.270272ms for enable addons: enabled=[storage-provisioner default-storageclass]
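
With the two addons enabled in roughly 490ms, their status can be confirmed from the same binary the test drives (profile name taken from the log):

    out/minikube-linux-amd64 -p no-preload-185074 addons list
    # storage-provisioner and default-storageclass should show as enabled
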
	I1209 02:35:44.212972  282749 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-185074" context rescaled to 1 replicas
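
The "rescaled to 1 replicas" lines here and for the other profiles are minikube trimming the kubeadm-default two-replica coredns Deployment down to one for a single-node cluster. The equivalent manual operation would be (a sketch; minikube performs this through the client-go API rather than kubectl):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1
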
	W1209 02:35:42.288054  284952 node_ready.go:57] node "default-k8s-diff-port-512414" has "Ready":"False" status (will retry)
	W1209 02:35:44.288177  284952 node_ready.go:57] node "default-k8s-diff-port-512414" has "Ready":"False" status (will retry)
	W1209 02:35:46.787583  284952 node_ready.go:57] node "default-k8s-diff-port-512414" has "Ready":"False" status (will retry)
	W1209 02:35:44.748320  281066 node_ready.go:57] node "old-k8s-version-126117" has "Ready":"False" status (will retry)
	W1209 02:35:47.247290  281066 node_ready.go:57] node "old-k8s-version-126117" has "Ready":"False" status (will retry)
	W1209 02:35:45.714446  282749 node_ready.go:57] node "no-preload-185074" has "Ready":"False" status (will retry)
	W1209 02:35:48.213968  282749 node_ready.go:57] node "no-preload-185074" has "Ready":"False" status (will retry)
	W1209 02:35:48.787786  284952 node_ready.go:57] node "default-k8s-diff-port-512414" has "Ready":"False" status (will retry)
	I1209 02:35:50.787628  284952 node_ready.go:49] node "default-k8s-diff-port-512414" is "Ready"
	I1209 02:35:50.787686  284952 node_ready.go:38] duration metric: took 10.503302874s for node "default-k8s-diff-port-512414" to be "Ready" ...
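
The node_ready poll above retries every couple of seconds until the node's Ready condition flips to True, taking about 10.5s here. Outside the harness the same wait is a one-liner (a sketch using the run's bundled kubectl):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/default-k8s-diff-port-512414 --timeout=6m
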
	I1209 02:35:50.787701  284952 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:35:50.787760  284952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:35:50.800054  284952 api_server.go:72] duration metric: took 10.794669837s to wait for apiserver process to appear ...
	I1209 02:35:50.800074  284952 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:35:50.800090  284952 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1209 02:35:50.804006  284952 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
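
The healthz probe above hits the API server directly over HTTPS. It can be reproduced with curl, assuming the default system:public-info-viewer binding that exposes /healthz, /livez, and /readyz to unauthenticated clients; -k skips certificate verification since only the liveness string matters here:

    curl -ks https://192.168.76.2:8444/healthz   # expected output: ok
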
	I1209 02:35:50.804810  284952 api_server.go:141] control plane version: v1.34.2
	I1209 02:35:50.804832  284952 api_server.go:131] duration metric: took 4.752493ms to wait for apiserver health ...
	I1209 02:35:50.804839  284952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:35:50.807505  284952 system_pods.go:59] 8 kube-system pods found
	I1209 02:35:50.807537  284952 system_pods.go:61] "coredns-66bc5c9577-gtkkc" [9cfbd4aa-8819-4717-b719-d53cce885003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:50.807545  284952 system_pods.go:61] "etcd-default-k8s-diff-port-512414" [fc466309-b267-4f65-ad79-33bd32d50172] Running
	I1209 02:35:50.807556  284952 system_pods.go:61] "kindnet-5hz5b" [aeff075a-c1a7-49b1-b3c4-dee45cc405fe] Running
	I1209 02:35:50.807562  284952 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-512414" [c0fe2ed6-0146-40bc-8252-df3067ed36a3] Running
	I1209 02:35:50.807572  284952 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-512414" [89bf91d9-547a-4455-a191-e4f67efea237] Running
	I1209 02:35:50.807577  284952 system_pods.go:61] "kube-proxy-nkdhm" [b3cad909-51ec-4cd6-b391-d993cf9e18d5] Running
	I1209 02:35:50.807582  284952 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-512414" [c7a653d6-b865-452a-9407-efdc9079f8a6] Running
	I1209 02:35:50.807590  284952 system_pods.go:61] "storage-provisioner" [be12b3a9-68f0-4ec5-8dee-5afcf03c12ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:50.807601  284952 system_pods.go:74] duration metric: took 2.755622ms to wait for pod list to return data ...
	I1209 02:35:50.807613  284952 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:35:50.809737  284952 default_sa.go:45] found service account: "default"
	I1209 02:35:50.809757  284952 default_sa.go:55] duration metric: took 2.138684ms for default service account to be created ...
	I1209 02:35:50.809764  284952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 02:35:50.812089  284952 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:50.812117  284952 system_pods.go:89] "coredns-66bc5c9577-gtkkc" [9cfbd4aa-8819-4717-b719-d53cce885003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:50.812122  284952 system_pods.go:89] "etcd-default-k8s-diff-port-512414" [fc466309-b267-4f65-ad79-33bd32d50172] Running
	I1209 02:35:50.812128  284952 system_pods.go:89] "kindnet-5hz5b" [aeff075a-c1a7-49b1-b3c4-dee45cc405fe] Running
	I1209 02:35:50.812155  284952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512414" [c0fe2ed6-0146-40bc-8252-df3067ed36a3] Running
	I1209 02:35:50.812166  284952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512414" [89bf91d9-547a-4455-a191-e4f67efea237] Running
	I1209 02:35:50.812172  284952 system_pods.go:89] "kube-proxy-nkdhm" [b3cad909-51ec-4cd6-b391-d993cf9e18d5] Running
	I1209 02:35:50.812178  284952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512414" [c7a653d6-b865-452a-9407-efdc9079f8a6] Running
	I1209 02:35:50.812184  284952 system_pods.go:89] "storage-provisioner" [be12b3a9-68f0-4ec5-8dee-5afcf03c12ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:50.812201  284952 retry.go:31] will retry after 229.951576ms: missing components: kube-dns
	I1209 02:35:51.045942  284952 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:51.045971  284952 system_pods.go:89] "coredns-66bc5c9577-gtkkc" [9cfbd4aa-8819-4717-b719-d53cce885003] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:51.045977  284952 system_pods.go:89] "etcd-default-k8s-diff-port-512414" [fc466309-b267-4f65-ad79-33bd32d50172] Running
	I1209 02:35:51.045983  284952 system_pods.go:89] "kindnet-5hz5b" [aeff075a-c1a7-49b1-b3c4-dee45cc405fe] Running
	I1209 02:35:51.045987  284952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-512414" [c0fe2ed6-0146-40bc-8252-df3067ed36a3] Running
	I1209 02:35:51.045990  284952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-512414" [89bf91d9-547a-4455-a191-e4f67efea237] Running
	I1209 02:35:51.045993  284952 system_pods.go:89] "kube-proxy-nkdhm" [b3cad909-51ec-4cd6-b391-d993cf9e18d5] Running
	I1209 02:35:51.045998  284952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-512414" [c7a653d6-b865-452a-9407-efdc9079f8a6] Running
	I1209 02:35:51.046001  284952 system_pods.go:89] "storage-provisioner" [be12b3a9-68f0-4ec5-8dee-5afcf03c12ff] Running
	I1209 02:35:51.046008  284952 system_pods.go:126] duration metric: took 236.239432ms to wait for k8s-apps to be running ...
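
Editor's note: the retry.go line earlier ("will retry after 229.951576ms: missing components: kube-dns") is a list-and-recheck loop: enumerate kube-system pods, and back off briefly while a required component is not yet Running. A hedged sketch assuming client-go (function name and backoff window are illustrative):

    package kubewait

    import (
    	"context"
    	"math/rand"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForKubeDNS polls until at least one kube-dns pod is Running.
    func waitForKubeDNS(ctx context.Context, cs *kubernetes.Clientset) error {
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    			LabelSelector: "k8s-app=kube-dns",
    		})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return nil
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		// Sub-second randomized backoff, as in the log (229ms, 289ms, 350ms).
    		case <-time.After(time.Duration(200+rand.Intn(200)) * time.Millisecond):
    		}
    	}
    }
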
	I1209 02:35:51.046019  284952 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 02:35:51.046057  284952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:51.058828  284952 system_svc.go:56] duration metric: took 12.803815ms WaitForService to wait for kubelet
	I1209 02:35:51.058852  284952 kubeadm.go:587] duration metric: took 11.053468929s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:35:51.058872  284952 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:35:51.061299  284952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:35:51.061320  284952 node_conditions.go:123] node cpu capacity is 8
	I1209 02:35:51.061334  284952 node_conditions.go:105] duration metric: took 2.456661ms to run NodePressure ...
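
Editor's note: the node_conditions.go lines read the capacity figures straight off the node object; corev1.ResourceList exposes helper accessors returning *resource.Quantity. A minimal sketch of that read:

    package nodeinfo

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // printCapacity mirrors the two capacity lines in the log above.
    func printCapacity(node *corev1.Node) {
    	fmt.Println("node storage ephemeral capacity is", node.Status.Capacity.StorageEphemeral().String())
    	fmt.Println("node cpu capacity is", node.Status.Capacity.Cpu().String())
    }
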
	I1209 02:35:51.061353  284952 start.go:242] waiting for startup goroutines ...
	I1209 02:35:51.061360  284952 start.go:247] waiting for cluster config update ...
	I1209 02:35:51.061372  284952 start.go:256] writing updated cluster config ...
	I1209 02:35:51.061595  284952 ssh_runner.go:195] Run: rm -f paused
	I1209 02:35:51.065102  284952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:51.068177  284952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.073004  284952 pod_ready.go:94] pod "coredns-66bc5c9577-gtkkc" is "Ready"
	I1209 02:35:52.073027  284952 pod_ready.go:86] duration metric: took 1.004832339s for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.075150  284952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.078527  284952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-512414" is "Ready"
	I1209 02:35:52.078544  284952 pod_ready.go:86] duration metric: took 3.373942ms for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
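
Editor's note: the per-pod check behind the pod_ready.go:94 lines reduces to inspecting the pod's PodReady condition. A minimal sketch, assuming client-go types:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
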
	W1209 02:35:49.747237  281066 node_ready.go:57] node "old-k8s-version-126117" has "Ready":"False" status (will retry)
	I1209 02:35:51.247547  281066 node_ready.go:49] node "old-k8s-version-126117" is "Ready"
	I1209 02:35:51.247577  281066 node_ready.go:38] duration metric: took 13.003648179s for node "old-k8s-version-126117" to be "Ready" ...
	I1209 02:35:51.247593  281066 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:35:51.247669  281066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:35:51.261497  281066 api_server.go:72] duration metric: took 13.381187638s to wait for apiserver process to appear ...
	I1209 02:35:51.261544  281066 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:35:51.261567  281066 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 02:35:51.266349  281066 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1209 02:35:51.267741  281066 api_server.go:141] control plane version: v1.28.0
	I1209 02:35:51.267767  281066 api_server.go:131] duration metric: took 6.213445ms to wait for apiserver health ...
	I1209 02:35:51.267776  281066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:35:51.271821  281066 system_pods.go:59] 8 kube-system pods found
	I1209 02:35:51.271856  281066 system_pods.go:61] "coredns-5dd5756b68-5d9gm" [337573f0-bfed-495c-b553-9c3f5f0625ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:51.271870  281066 system_pods.go:61] "etcd-old-k8s-version-126117" [c5e8de82-a4ba-4aff-8bce-c9bb587d4c83] Running
	I1209 02:35:51.271878  281066 system_pods.go:61] "kindnet-xk6zs" [e479d613-5da4-4db3-b0ff-799cda129c50] Running
	I1209 02:35:51.271884  281066 system_pods.go:61] "kube-apiserver-old-k8s-version-126117" [e937f568-d706-4ab9-b197-64cf39d5b180] Running
	I1209 02:35:51.271894  281066 system_pods.go:61] "kube-controller-manager-old-k8s-version-126117" [325624b2-d56e-495b-8ec8-3936a8c76684] Running
	I1209 02:35:51.271909  281066 system_pods.go:61] "kube-proxy-xjvf6" [47edaf66-8fca-4651-bdef-9d865250c8fe] Running
	I1209 02:35:51.271920  281066 system_pods.go:61] "kube-scheduler-old-k8s-version-126117" [1aaa4427-7320-49d6-bde0-f34482aee4ff] Running
	I1209 02:35:51.271928  281066 system_pods.go:61] "storage-provisioner" [beb552c5-ebdc-4c05-83a0-8236708b3afc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:51.271936  281066 system_pods.go:74] duration metric: took 4.152726ms to wait for pod list to return data ...
	I1209 02:35:51.271948  281066 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:35:51.274790  281066 default_sa.go:45] found service account: "default"
	I1209 02:35:51.274810  281066 default_sa.go:55] duration metric: took 2.855175ms for default service account to be created ...
	I1209 02:35:51.274820  281066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 02:35:51.280850  281066 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:51.280883  281066 system_pods.go:89] "coredns-5dd5756b68-5d9gm" [337573f0-bfed-495c-b553-9c3f5f0625ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:51.280931  281066 system_pods.go:89] "etcd-old-k8s-version-126117" [c5e8de82-a4ba-4aff-8bce-c9bb587d4c83] Running
	I1209 02:35:51.280953  281066 system_pods.go:89] "kindnet-xk6zs" [e479d613-5da4-4db3-b0ff-799cda129c50] Running
	I1209 02:35:51.280959  281066 system_pods.go:89] "kube-apiserver-old-k8s-version-126117" [e937f568-d706-4ab9-b197-64cf39d5b180] Running
	I1209 02:35:51.280966  281066 system_pods.go:89] "kube-controller-manager-old-k8s-version-126117" [325624b2-d56e-495b-8ec8-3936a8c76684] Running
	I1209 02:35:51.280972  281066 system_pods.go:89] "kube-proxy-xjvf6" [47edaf66-8fca-4651-bdef-9d865250c8fe] Running
	I1209 02:35:51.280982  281066 system_pods.go:89] "kube-scheduler-old-k8s-version-126117" [1aaa4427-7320-49d6-bde0-f34482aee4ff] Running
	I1209 02:35:51.280993  281066 system_pods.go:89] "storage-provisioner" [beb552c5-ebdc-4c05-83a0-8236708b3afc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:51.281029  281066 retry.go:31] will retry after 289.816013ms: missing components: kube-dns
	I1209 02:35:51.574526  281066 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:51.574561  281066 system_pods.go:89] "coredns-5dd5756b68-5d9gm" [337573f0-bfed-495c-b553-9c3f5f0625ef] Running
	I1209 02:35:51.574570  281066 system_pods.go:89] "etcd-old-k8s-version-126117" [c5e8de82-a4ba-4aff-8bce-c9bb587d4c83] Running
	I1209 02:35:51.574575  281066 system_pods.go:89] "kindnet-xk6zs" [e479d613-5da4-4db3-b0ff-799cda129c50] Running
	I1209 02:35:51.574581  281066 system_pods.go:89] "kube-apiserver-old-k8s-version-126117" [e937f568-d706-4ab9-b197-64cf39d5b180] Running
	I1209 02:35:51.574588  281066 system_pods.go:89] "kube-controller-manager-old-k8s-version-126117" [325624b2-d56e-495b-8ec8-3936a8c76684] Running
	I1209 02:35:51.574593  281066 system_pods.go:89] "kube-proxy-xjvf6" [47edaf66-8fca-4651-bdef-9d865250c8fe] Running
	I1209 02:35:51.574600  281066 system_pods.go:89] "kube-scheduler-old-k8s-version-126117" [1aaa4427-7320-49d6-bde0-f34482aee4ff] Running
	I1209 02:35:51.574609  281066 system_pods.go:89] "storage-provisioner" [beb552c5-ebdc-4c05-83a0-8236708b3afc] Running
	I1209 02:35:51.574619  281066 system_pods.go:126] duration metric: took 299.79194ms to wait for k8s-apps to be running ...
	I1209 02:35:51.574630  281066 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 02:35:51.574712  281066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:51.587700  281066 system_svc.go:56] duration metric: took 13.059928ms WaitForService to wait for kubelet
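
Editor's note: the WaitForService step shells out to systemctl, where exit status 0 means the unit is active and `--quiet` suppresses output. A stdlib-only sketch that checks the kubelet unit directly (the logged command's argument list is minikube's verbatim invocation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `systemctl is-active --quiet` prints nothing and signals the result via
    	// exit status; exec.Command surfaces a nonzero exit as an *exec.ExitError.
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet service not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }
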
	I1209 02:35:51.587733  281066 kubeadm.go:587] duration metric: took 13.707427707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:35:51.587757  281066 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:35:51.590002  281066 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:35:51.590025  281066 node_conditions.go:123] node cpu capacity is 8
	I1209 02:35:51.590040  281066 node_conditions.go:105] duration metric: took 2.277995ms to run NodePressure ...
	I1209 02:35:51.590050  281066 start.go:242] waiting for startup goroutines ...
	I1209 02:35:51.590057  281066 start.go:247] waiting for cluster config update ...
	I1209 02:35:51.590066  281066 start.go:256] writing updated cluster config ...
	I1209 02:35:51.590295  281066 ssh_runner.go:195] Run: rm -f paused
	I1209 02:35:51.594057  281066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:51.597711  281066 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5d9gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.601440  281066 pod_ready.go:94] pod "coredns-5dd5756b68-5d9gm" is "Ready"
	I1209 02:35:51.601461  281066 pod_ready.go:86] duration metric: took 3.729719ms for pod "coredns-5dd5756b68-5d9gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.603764  281066 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.607166  281066 pod_ready.go:94] pod "etcd-old-k8s-version-126117" is "Ready"
	I1209 02:35:51.607184  281066 pod_ready.go:86] duration metric: took 3.40334ms for pod "etcd-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.609551  281066 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.612888  281066 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-126117" is "Ready"
	I1209 02:35:51.612904  281066 pod_ready.go:86] duration metric: took 3.336587ms for pod "kube-apiserver-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.615283  281066 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:51.997962  281066 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-126117" is "Ready"
	I1209 02:35:51.997987  281066 pod_ready.go:86] duration metric: took 382.687176ms for pod "kube-controller-manager-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.198651  281066 pod_ready.go:83] waiting for pod "kube-proxy-xjvf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.598773  281066 pod_ready.go:94] pod "kube-proxy-xjvf6" is "Ready"
	I1209 02:35:52.598799  281066 pod_ready.go:86] duration metric: took 400.127472ms for pod "kube-proxy-xjvf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.797934  281066 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:53.197465  281066 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-126117" is "Ready"
	I1209 02:35:53.197488  281066 pod_ready.go:86] duration metric: took 399.531837ms for pod "kube-scheduler-old-k8s-version-126117" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:53.197498  281066 pod_ready.go:40] duration metric: took 1.603404495s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:53.240025  281066 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1209 02:35:53.241412  281066 out.go:203] 
	W1209 02:35:53.242630  281066 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1209 02:35:53.243734  281066 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1209 02:35:53.245081  281066 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-126117" cluster and "default" namespace by default
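
Editor's note: the version-skew warning above is simple arithmetic on the minor versions: kubectl 1.34 against a 1.28 cluster is |34 - 28| = 6 minor versions apart, while the no-preload run below (1.34 vs 1.35.0-beta.0) logs "minor skew: 1" and draws no warning, so minikube appears to warn only once the skew exceeds 1. A sketch of that check:

    package main

    import "fmt"

    func main() {
    	clientMinor, serverMinor := 34, 28 // kubectl 1.34.2 vs cluster 1.28.0
    	skew := clientMinor - serverMinor
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew)
    	// Threshold inferred from the log: skew 1 passes silently, skew 6 warns.
    	if skew > 1 {
    		fmt.Printf("! kubectl is version 1.%d, which may have incompatibilities with Kubernetes 1.%d\n",
    			clientMinor, serverMinor)
    	}
    }
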
	I1209 02:35:52.080283  284952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.083580  284952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-512414" is "Ready"
	I1209 02:35:52.083601  284952 pod_ready.go:86] duration metric: took 3.295405ms for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.085309  284952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.271732  284952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-512414" is "Ready"
	I1209 02:35:52.271759  284952 pod_ready.go:86] duration metric: took 186.431515ms for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.471464  284952 pod_ready.go:83] waiting for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:52.871059  284952 pod_ready.go:94] pod "kube-proxy-nkdhm" is "Ready"
	I1209 02:35:52.871086  284952 pod_ready.go:86] duration metric: took 399.597434ms for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:53.072366  284952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:53.471750  284952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-512414" is "Ready"
	I1209 02:35:53.471778  284952 pod_ready.go:86] duration metric: took 399.384607ms for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:53.471790  284952 pod_ready.go:40] duration metric: took 2.406663395s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:53.514570  284952 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:35:53.515947  284952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-512414" cluster and "default" namespace by default
	W1209 02:35:50.214125  282749 node_ready.go:57] node "no-preload-185074" has "Ready":"False" status (will retry)
	W1209 02:35:52.714109  282749 node_ready.go:57] node "no-preload-185074" has "Ready":"False" status (will retry)
	W1209 02:35:54.714360  282749 node_ready.go:57] node "no-preload-185074" has "Ready":"False" status (will retry)
	I1209 02:35:57.213256  282749 node_ready.go:49] node "no-preload-185074" is "Ready"
	I1209 02:35:57.213287  282749 node_ready.go:38] duration metric: took 13.502437462s for node "no-preload-185074" to be "Ready" ...
	I1209 02:35:57.213301  282749 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:35:57.213345  282749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:35:57.225388  282749 api_server.go:72] duration metric: took 13.793692181s to wait for apiserver process to appear ...
	I1209 02:35:57.225417  282749 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:35:57.225437  282749 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1209 02:35:57.229831  282749 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1209 02:35:57.230595  282749 api_server.go:141] control plane version: v1.35.0-beta.0
	I1209 02:35:57.230617  282749 api_server.go:131] duration metric: took 5.194695ms to wait for apiserver health ...
	I1209 02:35:57.230625  282749 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:35:57.233579  282749 system_pods.go:59] 8 kube-system pods found
	I1209 02:35:57.233607  282749 system_pods.go:61] "coredns-7d764666f9-m6tbs" [11973463-7b09-4a70-ba86-1a54c90ed6e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:57.233616  282749 system_pods.go:61] "etcd-no-preload-185074" [b31ecaf6-becd-44cc-86ba-46c923df2492] Running
	I1209 02:35:57.233622  282749 system_pods.go:61] "kindnet-pflxj" [712b93ed-2f9a-4e6b-a402-8e7349db1b72] Running
	I1209 02:35:57.233669  282749 system_pods.go:61] "kube-apiserver-no-preload-185074" [890e1890-37c7-4051-90c6-6ce11fab1cd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:35:57.233679  282749 system_pods.go:61] "kube-controller-manager-no-preload-185074" [4b61169a-80a3-46e0-93d4-894c2007372b] Running
	I1209 02:35:57.233684  282749 system_pods.go:61] "kube-proxy-8jh88" [f8108d3b-c4c6-41e0-81a1-d6acff22e510] Running
	I1209 02:35:57.233688  282749 system_pods.go:61] "kube-scheduler-no-preload-185074" [02f83411-c262-4f83-a198-a37586abe4c7] Running
	I1209 02:35:57.233694  282749 system_pods.go:61] "storage-provisioner" [04833b92-89ee-467b-8b6d-27fdfa7ddb79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:57.233701  282749 system_pods.go:74] duration metric: took 3.071955ms to wait for pod list to return data ...
	I1209 02:35:57.233707  282749 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:35:57.235784  282749 default_sa.go:45] found service account: "default"
	I1209 02:35:57.235802  282749 default_sa.go:55] duration metric: took 2.089652ms for default service account to be created ...
	I1209 02:35:57.235811  282749 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 02:35:57.238117  282749 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:57.238145  282749 system_pods.go:89] "coredns-7d764666f9-m6tbs" [11973463-7b09-4a70-ba86-1a54c90ed6e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:57.238154  282749 system_pods.go:89] "etcd-no-preload-185074" [b31ecaf6-becd-44cc-86ba-46c923df2492] Running
	I1209 02:35:57.238165  282749 system_pods.go:89] "kindnet-pflxj" [712b93ed-2f9a-4e6b-a402-8e7349db1b72] Running
	I1209 02:35:57.238178  282749 system_pods.go:89] "kube-apiserver-no-preload-185074" [890e1890-37c7-4051-90c6-6ce11fab1cd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:35:57.238184  282749 system_pods.go:89] "kube-controller-manager-no-preload-185074" [4b61169a-80a3-46e0-93d4-894c2007372b] Running
	I1209 02:35:57.238191  282749 system_pods.go:89] "kube-proxy-8jh88" [f8108d3b-c4c6-41e0-81a1-d6acff22e510] Running
	I1209 02:35:57.238200  282749 system_pods.go:89] "kube-scheduler-no-preload-185074" [02f83411-c262-4f83-a198-a37586abe4c7] Running
	I1209 02:35:57.238208  282749 system_pods.go:89] "storage-provisioner" [04833b92-89ee-467b-8b6d-27fdfa7ddb79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:57.238232  282749 retry.go:31] will retry after 228.767315ms: missing components: kube-dns
	I1209 02:35:57.470671  282749 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:57.470703  282749 system_pods.go:89] "coredns-7d764666f9-m6tbs" [11973463-7b09-4a70-ba86-1a54c90ed6e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:35:57.470712  282749 system_pods.go:89] "etcd-no-preload-185074" [b31ecaf6-becd-44cc-86ba-46c923df2492] Running
	I1209 02:35:57.470720  282749 system_pods.go:89] "kindnet-pflxj" [712b93ed-2f9a-4e6b-a402-8e7349db1b72] Running
	I1209 02:35:57.470725  282749 system_pods.go:89] "kube-apiserver-no-preload-185074" [890e1890-37c7-4051-90c6-6ce11fab1cd3] Running
	I1209 02:35:57.470731  282749 system_pods.go:89] "kube-controller-manager-no-preload-185074" [4b61169a-80a3-46e0-93d4-894c2007372b] Running
	I1209 02:35:57.470737  282749 system_pods.go:89] "kube-proxy-8jh88" [f8108d3b-c4c6-41e0-81a1-d6acff22e510] Running
	I1209 02:35:57.470742  282749 system_pods.go:89] "kube-scheduler-no-preload-185074" [02f83411-c262-4f83-a198-a37586abe4c7] Running
	I1209 02:35:57.470751  282749 system_pods.go:89] "storage-provisioner" [04833b92-89ee-467b-8b6d-27fdfa7ddb79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:35:57.470773  282749 retry.go:31] will retry after 350.950435ms: missing components: kube-dns
	I1209 02:35:57.826216  282749 system_pods.go:86] 8 kube-system pods found
	I1209 02:35:57.826246  282749 system_pods.go:89] "coredns-7d764666f9-m6tbs" [11973463-7b09-4a70-ba86-1a54c90ed6e5] Running
	I1209 02:35:57.826255  282749 system_pods.go:89] "etcd-no-preload-185074" [b31ecaf6-becd-44cc-86ba-46c923df2492] Running
	I1209 02:35:57.826260  282749 system_pods.go:89] "kindnet-pflxj" [712b93ed-2f9a-4e6b-a402-8e7349db1b72] Running
	I1209 02:35:57.826265  282749 system_pods.go:89] "kube-apiserver-no-preload-185074" [890e1890-37c7-4051-90c6-6ce11fab1cd3] Running
	I1209 02:35:57.826271  282749 system_pods.go:89] "kube-controller-manager-no-preload-185074" [4b61169a-80a3-46e0-93d4-894c2007372b] Running
	I1209 02:35:57.826275  282749 system_pods.go:89] "kube-proxy-8jh88" [f8108d3b-c4c6-41e0-81a1-d6acff22e510] Running
	I1209 02:35:57.826281  282749 system_pods.go:89] "kube-scheduler-no-preload-185074" [02f83411-c262-4f83-a198-a37586abe4c7] Running
	I1209 02:35:57.826288  282749 system_pods.go:89] "storage-provisioner" [04833b92-89ee-467b-8b6d-27fdfa7ddb79] Running
	I1209 02:35:57.826298  282749 system_pods.go:126] duration metric: took 590.479788ms to wait for k8s-apps to be running ...
	I1209 02:35:57.826312  282749 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 02:35:57.826361  282749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:57.839092  282749 system_svc.go:56] duration metric: took 12.765246ms WaitForService to wait for kubelet
	I1209 02:35:57.839119  282749 kubeadm.go:587] duration metric: took 14.407429281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:35:57.839135  282749 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:35:57.841524  282749 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:35:57.841549  282749 node_conditions.go:123] node cpu capacity is 8
	I1209 02:35:57.841572  282749 node_conditions.go:105] duration metric: took 2.430893ms to run NodePressure ...
	I1209 02:35:57.841586  282749 start.go:242] waiting for startup goroutines ...
	I1209 02:35:57.841599  282749 start.go:247] waiting for cluster config update ...
	I1209 02:35:57.841612  282749 start.go:256] writing updated cluster config ...
	I1209 02:35:57.841919  282749 ssh_runner.go:195] Run: rm -f paused
	I1209 02:35:57.845884  282749 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:57.848652  282749 pod_ready.go:83] waiting for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.852327  282749 pod_ready.go:94] pod "coredns-7d764666f9-m6tbs" is "Ready"
	I1209 02:35:57.852346  282749 pod_ready.go:86] duration metric: took 3.670983ms for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.854036  282749 pod_ready.go:83] waiting for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.857254  282749 pod_ready.go:94] pod "etcd-no-preload-185074" is "Ready"
	I1209 02:35:57.857271  282749 pod_ready.go:86] duration metric: took 3.218113ms for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.858917  282749 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.865892  282749 pod_ready.go:94] pod "kube-apiserver-no-preload-185074" is "Ready"
	I1209 02:35:57.865915  282749 pod_ready.go:86] duration metric: took 6.979488ms for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:57.873323  282749 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:58.249620  282749 pod_ready.go:94] pod "kube-controller-manager-no-preload-185074" is "Ready"
	I1209 02:35:58.249671  282749 pod_ready.go:86] duration metric: took 376.326266ms for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:58.449785  282749 pod_ready.go:83] waiting for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:58.849286  282749 pod_ready.go:94] pod "kube-proxy-8jh88" is "Ready"
	I1209 02:35:58.849312  282749 pod_ready.go:86] duration metric: took 399.501501ms for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:59.050355  282749 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:59.449952  282749 pod_ready.go:94] pod "kube-scheduler-no-preload-185074" is "Ready"
	I1209 02:35:59.449985  282749 pod_ready.go:86] duration metric: took 399.60648ms for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:35:59.450001  282749 pod_ready.go:40] duration metric: took 1.604087976s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:35:59.493893  282749 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:35:59.495629  282749 out.go:179] * Done! kubectl is now configured to use "no-preload-185074" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 02:35:51 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:51.276352357Z" level=info msg="Starting container: f695e4b5cfc38c8c16e8ff7ccba83f113abd9eb4b659eb280ec0f8e272b2b2fc" id=f8b86d92-3a91-449b-9bf9-9eeb89825b6d name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:35:51 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:51.278462991Z" level=info msg="Started container" PID=2171 containerID=f695e4b5cfc38c8c16e8ff7ccba83f113abd9eb4b659eb280ec0f8e272b2b2fc description=kube-system/coredns-5dd5756b68-5d9gm/coredns id=f8b86d92-3a91-449b-9bf9-9eeb89825b6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa39ab2af4670617117ad7730cbe004b8e63c313028e2af8331e33b77d948d31
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.734029204Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9ea6f7fa-f261-4a44-b4f2-d580ec24cb55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.734127702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.739346016Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f0afdea7d0bbbf6dbcc02d445ecaae8f59e352a3c013e92df2a155a71d44576 UID:32ad79f5-6d8a-4a14-aefb-defd3600eb69 NetNS:/var/run/netns/0372e449-2da2-4c8c-ac63-48cdcd9a5ce3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c807d0}] Aliases:map[]}"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.739371831Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.749112233Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f0afdea7d0bbbf6dbcc02d445ecaae8f59e352a3c013e92df2a155a71d44576 UID:32ad79f5-6d8a-4a14-aefb-defd3600eb69 NetNS:/var/run/netns/0372e449-2da2-4c8c-ac63-48cdcd9a5ce3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c807d0}] Aliases:map[]}"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.749227567Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.749906627Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.75070565Z" level=info msg="Ran pod sandbox 4f0afdea7d0bbbf6dbcc02d445ecaae8f59e352a3c013e92df2a155a71d44576 with infra container: default/busybox/POD" id=9ea6f7fa-f261-4a44-b4f2-d580ec24cb55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.75184758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0c343ebd-7fd7-46b3-a24b-1e08f6f8472c name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.751973742Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0c343ebd-7fd7-46b3-a24b-1e08f6f8472c name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.752013081Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0c343ebd-7fd7-46b3-a24b-1e08f6f8472c name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.752466324Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f1e68bf-1b5b-4fa0-ba08-70d65c21bc4e name=/runtime.v1.ImageService/PullImage
	Dec 09 02:35:53 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:53.753922053Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.356472108Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2f1e68bf-1b5b-4fa0-ba08-70d65c21bc4e name=/runtime.v1.ImageService/PullImage
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.357161232Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f1940874-5730-4863-adf9-75a7e7627fb5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.358063734Z" level=info msg="Creating container: default/busybox/busybox" id=ed97808b-207e-4c14-93e2-8cbbdbac291e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.358170136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.36153326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.362055597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.397605468Z" level=info msg="Created container a2ee363566448ea331de274aa69634c7eef171da3f267548f82d281f4be10653: default/busybox/busybox" id=ed97808b-207e-4c14-93e2-8cbbdbac291e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.398054611Z" level=info msg="Starting container: a2ee363566448ea331de274aa69634c7eef171da3f267548f82d281f4be10653" id=b180e034-b018-45f8-acc0-5d575d00d6c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:35:54 old-k8s-version-126117 crio[779]: time="2025-12-09T02:35:54.399852369Z" level=info msg="Started container" PID=2246 containerID=a2ee363566448ea331de274aa69634c7eef171da3f267548f82d281f4be10653 description=default/busybox/busybox id=b180e034-b018-45f8-acc0-5d575d00d6c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f0afdea7d0bbbf6dbcc02d445ecaae8f59e352a3c013e92df2a155a71d44576
	Dec 09 02:36:00 old-k8s-version-126117 crio[779]: time="2025-12-09T02:36:00.512572693Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
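
Editor's note: the CRI-O excerpt above shows the cold-start path for the busybox image: ImageStatus finds nothing, PullImage resolves the tag to a digest, then CreateContainer/StartContainer run it. Roughly the same check-then-pull can be driven from a shell on the node with crictl; a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
    	// `crictl inspecti` exits nonzero when the image is absent, mirroring the
    	// "Image ... not found" ImageStatus reply in the log.
    	if err := exec.Command("sudo", "crictl", "inspecti", img).Run(); err != nil {
    		fmt.Println("image not present; pulling")
    		out, pullErr := exec.Command("sudo", "crictl", "pull", img).CombinedOutput()
    		if pullErr != nil {
    			panic(fmt.Sprintf("pull failed: %v: %s", pullErr, out))
    		}
    		fmt.Printf("%s", out) // crictl reports the image the tag resolved to
    	}
    }
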
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	a2ee363566448       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4f0afdea7d0bb       busybox                                          default
	f695e4b5cfc38       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      10 seconds ago      Running             coredns                   0                   aa39ab2af4670       coredns-5dd5756b68-5d9gm                         kube-system
	23ecd2a657f5a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   af356c0c6b9d9       storage-provisioner                              kube-system
	b39a2e2b9fc42       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   d1e582215725b       kindnet-xk6zs                                    kube-system
	036f090dd6700       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   e8947db358c9c       kube-proxy-xjvf6                                 kube-system
	c0ca283a997c3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   e476e2d1e9896       etcd-old-k8s-version-126117                      kube-system
	fbafb73b161d4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   1eb318bf9075d       kube-scheduler-old-k8s-version-126117            kube-system
	f46733fa9b9b7       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   1cb983246de5c       kube-controller-manager-old-k8s-version-126117   kube-system
	cb6e22e60986e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   9b841e9dadd24       kube-apiserver-old-k8s-version-126117            kube-system
	
	
	==> coredns [f695e4b5cfc38c8c16e8ff7ccba83f113abd9eb4b659eb280ec0f8e272b2b2fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56801 - 50389 "HINFO IN 3502770326383462675.5556365956681974319. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064357924s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-126117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-126117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=old-k8s-version-126117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-126117
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:35:55 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:35:55 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:35:55 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:35:55 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-126117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                fe5af2e7-907b-43f5-907c-9c3129342d44
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-5d9gm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-126117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-xk6zs                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-126117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-old-k8s-version-126117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-xjvf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-126117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node old-k8s-version-126117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node old-k8s-version-126117 event: Registered Node old-k8s-version-126117 in Controller
	  Normal  NodeReady                11s   kubelet          Node old-k8s-version-126117 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [c0ca283a997c391c1f3f3bf58185c9294c0b1e9aa356f560ae0c36646158c7c4] <==
	{"level":"info","ts":"2025-12-09T02:35:19.176744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-09T02:35:19.176926Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-09T02:35:19.177731Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-09T02:35:19.17794Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-09T02:35:19.17797Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-09T02:35:19.178232Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:35:19.17827Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:35:19.565425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-09T02:35:19.56548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-09T02:35:19.565521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-09T02:35:19.56554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-09T02:35:19.565549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-09T02:35:19.56556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-09T02:35:19.56557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-09T02:35:19.566237Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:35:19.566859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-126117 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-09T02:35:19.566866Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:35:19.566901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:35:19.567921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:35:19.568253Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:35:19.568845Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:35:19.571414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-09T02:35:19.571537Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-09T02:35:19.572971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-09T02:35:19.573075Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 02:36:02 up  1:18,  0 user,  load average: 3.09, 2.40, 1.76
	Linux old-k8s-version-126117 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b39a2e2b9fc428890d5f152498066cc5e2d8fc0014447a6f085a1c8badbf1dc9] <==
	I1209 02:35:40.135477       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:35:40.230464       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 02:35:40.230766       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:35:40.230895       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:35:40.230934       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:35:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:35:40.432659       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:35:40.432763       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:35:40.432781       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:35:40.432915       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:35:40.830163       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:35:40.830197       1 metrics.go:72] Registering metrics
	I1209 02:35:40.830357       1 controller.go:711] "Syncing nftables rules"
	I1209 02:35:50.436813       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:35:50.436867       1 main.go:301] handling current node
	I1209 02:36:00.435714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:00.435753       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb6e22e60986eb41733d8665783235655228b3315a636e84e20ba464534e7f79] <==
	I1209 02:35:21.282158       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1209 02:35:21.282255       1 aggregator.go:166] initial CRD sync complete...
	I1209 02:35:21.282264       1 autoregister_controller.go:141] Starting autoregister controller
	I1209 02:35:21.282274       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:35:21.282281       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:35:21.283362       1 controller.go:624] quota admission added evaluator for: namespaces
	I1209 02:35:21.284176       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1209 02:35:21.287671       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1209 02:35:21.287688       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1209 02:35:21.322479       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:35:22.186881       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 02:35:22.192044       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:35:22.192065       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:35:22.612449       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:35:22.647995       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:35:22.690687       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:35:22.696240       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1209 02:35:22.697083       1 controller.go:624] quota admission added evaluator for: endpoints
	I1209 02:35:22.701875       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:35:23.241212       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1209 02:35:24.286625       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1209 02:35:24.295611       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:35:24.307911       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1209 02:35:37.479577       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1209 02:35:37.578914       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f46733fa9b9b774d743ccf3c027094de05b98fe7d8eb61ac67a7f12f92b03cb0] <==
	I1209 02:35:37.024534       1 shared_informer.go:318] Caches are synced for service account
	I1209 02:35:37.078540       1 shared_informer.go:318] Caches are synced for namespace
	I1209 02:35:37.396735       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:35:37.472948       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:35:37.472992       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1209 02:35:37.483538       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1209 02:35:37.587366       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xjvf6"
	I1209 02:35:37.588753       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xk6zs"
	I1209 02:35:37.882326       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fxlr8"
	I1209 02:35:37.889325       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5d9gm"
	I1209 02:35:37.897381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="413.803811ms"
	I1209 02:35:37.910332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.613789ms"
	I1209 02:35:37.910616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="226.157µs"
	I1209 02:35:37.913170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.149µs"
	I1209 02:35:38.268683       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1209 02:35:38.283120       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fxlr8"
	I1209 02:35:38.289517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.983497ms"
	I1209 02:35:38.295652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.073433ms"
	I1209 02:35:38.295929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.15µs"
	I1209 02:35:50.930299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.793µs"
	I1209 02:35:50.941476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.705µs"
	I1209 02:35:51.442107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.568µs"
	I1209 02:35:51.458113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.696194ms"
	I1209 02:35:51.458243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.662µs"
	I1209 02:35:51.977442       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [036f090dd67005e2a6d33b207af29f6035d6ed1cfb0d70e4d2d8d3fa7493e27a] <==
	I1209 02:35:38.130602       1 server_others.go:69] "Using iptables proxy"
	I1209 02:35:38.141406       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1209 02:35:38.166107       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:35:38.170149       1 server_others.go:152] "Using iptables Proxier"
	I1209 02:35:38.170246       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1209 02:35:38.170285       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1209 02:35:38.170323       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1209 02:35:38.170663       1 server.go:846] "Version info" version="v1.28.0"
	I1209 02:35:38.170838       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:35:38.171703       1 config.go:315] "Starting node config controller"
	I1209 02:35:38.172804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1209 02:35:38.172836       1 config.go:188] "Starting service config controller"
	I1209 02:35:38.172841       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1209 02:35:38.172872       1 config.go:97] "Starting endpoint slice config controller"
	I1209 02:35:38.172877       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1209 02:35:38.273398       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1209 02:35:38.273430       1 shared_informer.go:318] Caches are synced for service config
	I1209 02:35:38.273740       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fbafb73b161d4150a8c47374c93e23789186ac872999e77f16f7e1a9da2c62a0] <==
	W1209 02:35:21.247173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 02:35:21.247200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1209 02:35:21.247231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1209 02:35:21.247177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 02:35:21.247247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 02:35:21.247253       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1209 02:35:21.247258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 02:35:21.247274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1209 02:35:21.247325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 02:35:21.247338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1209 02:35:21.247445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 02:35:21.247475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1209 02:35:21.247574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 02:35:21.247589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1209 02:35:22.159789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 02:35:22.159822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1209 02:35:22.298937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 02:35:22.299040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1209 02:35:22.303308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 02:35:22.303337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1209 02:35:22.357458       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 02:35:22.357491       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:35:22.370157       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 02:35:22.370194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1209 02:35:24.544143       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 02:35:36 old-k8s-version-126117 kubelet[1408]: I1209 02:35:36.930542    1408 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.593352    1408 topology_manager.go:215] "Topology Admit Handler" podUID="47edaf66-8fca-4651-bdef-9d865250c8fe" podNamespace="kube-system" podName="kube-proxy-xjvf6"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.594805    1408 topology_manager.go:215] "Topology Admit Handler" podUID="e479d613-5da4-4db3-b0ff-799cda129c50" podNamespace="kube-system" podName="kindnet-xk6zs"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644083    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e479d613-5da4-4db3-b0ff-799cda129c50-xtables-lock\") pod \"kindnet-xk6zs\" (UID: \"e479d613-5da4-4db3-b0ff-799cda129c50\") " pod="kube-system/kindnet-xk6zs"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644156    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tffk\" (UniqueName: \"kubernetes.io/projected/47edaf66-8fca-4651-bdef-9d865250c8fe-kube-api-access-8tffk\") pod \"kube-proxy-xjvf6\" (UID: \"47edaf66-8fca-4651-bdef-9d865250c8fe\") " pod="kube-system/kube-proxy-xjvf6"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644190    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47edaf66-8fca-4651-bdef-9d865250c8fe-kube-proxy\") pod \"kube-proxy-xjvf6\" (UID: \"47edaf66-8fca-4651-bdef-9d865250c8fe\") " pod="kube-system/kube-proxy-xjvf6"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644215    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e479d613-5da4-4db3-b0ff-799cda129c50-lib-modules\") pod \"kindnet-xk6zs\" (UID: \"e479d613-5da4-4db3-b0ff-799cda129c50\") " pod="kube-system/kindnet-xk6zs"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644243    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrs7\" (UniqueName: \"kubernetes.io/projected/e479d613-5da4-4db3-b0ff-799cda129c50-kube-api-access-nfrs7\") pod \"kindnet-xk6zs\" (UID: \"e479d613-5da4-4db3-b0ff-799cda129c50\") " pod="kube-system/kindnet-xk6zs"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644277    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47edaf66-8fca-4651-bdef-9d865250c8fe-lib-modules\") pod \"kube-proxy-xjvf6\" (UID: \"47edaf66-8fca-4651-bdef-9d865250c8fe\") " pod="kube-system/kube-proxy-xjvf6"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644310    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e479d613-5da4-4db3-b0ff-799cda129c50-cni-cfg\") pod \"kindnet-xk6zs\" (UID: \"e479d613-5da4-4db3-b0ff-799cda129c50\") " pod="kube-system/kindnet-xk6zs"
	Dec 09 02:35:37 old-k8s-version-126117 kubelet[1408]: I1209 02:35:37.644344    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47edaf66-8fca-4651-bdef-9d865250c8fe-xtables-lock\") pod \"kube-proxy-xjvf6\" (UID: \"47edaf66-8fca-4651-bdef-9d865250c8fe\") " pod="kube-system/kube-proxy-xjvf6"
	Dec 09 02:35:40 old-k8s-version-126117 kubelet[1408]: I1209 02:35:40.421524    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xjvf6" podStartSLOduration=3.421474088 podCreationTimestamp="2025-12-09 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:38.418340274 +0000 UTC m=+14.155952671" watchObservedRunningTime="2025-12-09 02:35:40.421474088 +0000 UTC m=+16.159086490"
	Dec 09 02:35:40 old-k8s-version-126117 kubelet[1408]: I1209 02:35:40.421723    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xk6zs" podStartSLOduration=1.428784807 podCreationTimestamp="2025-12-09 02:35:37 +0000 UTC" firstStartedPulling="2025-12-09 02:35:37.917795254 +0000 UTC m=+13.655407711" lastFinishedPulling="2025-12-09 02:35:39.910698325 +0000 UTC m=+15.648310720" observedRunningTime="2025-12-09 02:35:40.421674095 +0000 UTC m=+16.159286493" watchObservedRunningTime="2025-12-09 02:35:40.421687816 +0000 UTC m=+16.159300219"
	Dec 09 02:35:50 old-k8s-version-126117 kubelet[1408]: I1209 02:35:50.910384    1408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 09 02:35:50 old-k8s-version-126117 kubelet[1408]: I1209 02:35:50.930574    1408 topology_manager.go:215] "Topology Admit Handler" podUID="337573f0-bfed-495c-b553-9c3f5f0625ef" podNamespace="kube-system" podName="coredns-5dd5756b68-5d9gm"
	Dec 09 02:35:50 old-k8s-version-126117 kubelet[1408]: I1209 02:35:50.930839    1408 topology_manager.go:215] "Topology Admit Handler" podUID="beb552c5-ebdc-4c05-83a0-8236708b3afc" podNamespace="kube-system" podName="storage-provisioner"
	Dec 09 02:35:51 old-k8s-version-126117 kubelet[1408]: I1209 02:35:51.039063    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgqp\" (UniqueName: \"kubernetes.io/projected/337573f0-bfed-495c-b553-9c3f5f0625ef-kube-api-access-dkgqp\") pod \"coredns-5dd5756b68-5d9gm\" (UID: \"337573f0-bfed-495c-b553-9c3f5f0625ef\") " pod="kube-system/coredns-5dd5756b68-5d9gm"
	Dec 09 02:35:51 old-k8s-version-126117 kubelet[1408]: I1209 02:35:51.039106    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5zl\" (UniqueName: \"kubernetes.io/projected/beb552c5-ebdc-4c05-83a0-8236708b3afc-kube-api-access-wm5zl\") pod \"storage-provisioner\" (UID: \"beb552c5-ebdc-4c05-83a0-8236708b3afc\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:51 old-k8s-version-126117 kubelet[1408]: I1209 02:35:51.039134    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/337573f0-bfed-495c-b553-9c3f5f0625ef-config-volume\") pod \"coredns-5dd5756b68-5d9gm\" (UID: \"337573f0-bfed-495c-b553-9c3f5f0625ef\") " pod="kube-system/coredns-5dd5756b68-5d9gm"
	Dec 09 02:35:51 old-k8s-version-126117 kubelet[1408]: I1209 02:35:51.039155    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/beb552c5-ebdc-4c05-83a0-8236708b3afc-tmp\") pod \"storage-provisioner\" (UID: \"beb552c5-ebdc-4c05-83a0-8236708b3afc\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:51 old-k8s-version-126117 kubelet[1408]: I1209 02:35:51.451262    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5d9gm" podStartSLOduration=14.451217592999999 podCreationTimestamp="2025-12-09 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:51.44217852 +0000 UTC m=+27.179790921" watchObservedRunningTime="2025-12-09 02:35:51.451217593 +0000 UTC m=+27.188830017"
	Dec 09 02:35:53 old-k8s-version-126117 kubelet[1408]: I1209 02:35:53.431677    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.431602206 podCreationTimestamp="2025-12-09 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:51.460305614 +0000 UTC m=+27.197918016" watchObservedRunningTime="2025-12-09 02:35:53.431602206 +0000 UTC m=+29.169214608"
	Dec 09 02:35:53 old-k8s-version-126117 kubelet[1408]: I1209 02:35:53.432085    1408 topology_manager.go:215] "Topology Admit Handler" podUID="32ad79f5-6d8a-4a14-aefb-defd3600eb69" podNamespace="default" podName="busybox"
	Dec 09 02:35:53 old-k8s-version-126117 kubelet[1408]: I1209 02:35:53.452862    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dwr\" (UniqueName: \"kubernetes.io/projected/32ad79f5-6d8a-4a14-aefb-defd3600eb69-kube-api-access-k9dwr\") pod \"busybox\" (UID: \"32ad79f5-6d8a-4a14-aefb-defd3600eb69\") " pod="default/busybox"
	Dec 09 02:35:54 old-k8s-version-126117 kubelet[1408]: I1209 02:35:54.446814    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.842238587 podCreationTimestamp="2025-12-09 02:35:53 +0000 UTC" firstStartedPulling="2025-12-09 02:35:53.752162182 +0000 UTC m=+29.489774567" lastFinishedPulling="2025-12-09 02:35:54.356681265 +0000 UTC m=+30.094293652" observedRunningTime="2025-12-09 02:35:54.4464376 +0000 UTC m=+30.184050019" watchObservedRunningTime="2025-12-09 02:35:54.446757672 +0000 UTC m=+30.184370077"
	
	
	==> storage-provisioner [23ecd2a657f5a7243256865bd46555c11cffc99c7ae69a4bed9d12a4c6fe67f7] <==
	I1209 02:35:51.286254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:35:51.296239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:35:51.296284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 02:35:51.302505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:35:51.302576       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85c09fe8-de97-42ff-bfa4-d07a489e759c", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-126117_57c198ef-0377-4d64-bc19-12d8344dee54 became leader
	I1209 02:35:51.302676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_57c198ef-0377-4d64-bc19-12d8344dee54!
	I1209 02:35:51.403779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_57c198ef-0377-4d64-bc19-12d8344dee54!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-126117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.317572ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
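The non-zero exit above comes from minikube's pre-enable safety check: before enabling an addon it shells into the node and lists containers through runc to decide whether the cluster is paused, and on this crio node the check dies because /run/runc does not exist. A hand-run version of that probe, as a sketch (assuming the profile name from this run and the out/minikube-linux-amd64 binary used throughout this report):

	# Re-run the exact check that failed, per the stderr above:
	out/minikube-linux-amd64 -p default-k8s-diff-port-512414 ssh -- sudo runc list -f json
	# Ask the CRI runtime directly; crictl talks to the crio socket and does not
	# depend on the /run/runc state directory existing:
	out/minikube-linux-amd64 -p default-k8s-diff-port-512414 ssh -- sudo crictl ps -a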
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-512414 describe deploy/metrics-server -n kube-system: exit status 1 (76.800163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-512414 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
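The assertion in start_stop_delete_test.go:219 looks for the rewritten image reference in the deployment description; on a run where the addon actually deployed, the same field can be inspected directly. A sketch, assuming kubectl and the context name from this run (here the deployment was never created, so this returns NotFound):

	# Print the container images of the metrics-server deployment the test expected:
	kubectl --context default-k8s-diff-port-512414 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'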
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-512414
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-512414:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	        "Created": "2025-12-09T02:35:16.836170165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:35:16.871597952Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hosts",
	        "LogPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1-json.log",
	        "Name": "/default-k8s-diff-port-512414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-512414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-512414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	                "LowerDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-512414",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-512414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-512414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0403a60ce05f11477cd9b5fd1c76a014f922933864b75719eeb7e5136575bbb4",
	            "SandboxKey": "/var/run/docker/netns/0403a60ce05f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-512414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e16439d105c69dbf592b83cbbc24d475e1a7bdde09cef9f521cc22e0f04ea46e",
	                    "EndpointID": "5ec6bd40f7809891c1f522ccc93320a2dc842e03eef4da280690bb25a53a541b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "4a:2a:7a:bc:f3:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-512414",
	                        "eee17c4f2786"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
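A single field can be pulled out of inspect output like the above with a Go template instead of scanning the full JSON. A sketch, assuming the docker CLI and the container name shown above; it extracts the host port bound to the apiserver port 8444/tcp (33071 in this run):

	# Resolve the host port mapped to the container's 8444/tcp:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-512414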
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25: (1.030647715s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-933067 sudo containerd config dump                                                                                                                                                                                                  │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ ssh     │ -p cilium-933067 sudo crio config                                                                                                                                                                                                             │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ delete  │ -p cilium-933067                                                                                                                                                                                                                              │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:32 UTC │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:33 UTC │
	│ delete  │ -p stopped-upgrade-768415                                                                                                                                                                                                                     │ stopped-upgrade-768415       │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p force-systemd-flag-598501 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ ssh     │ force-systemd-flag-598501 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ delete  │ -p force-systemd-flag-598501                                                                                                                                                                                                                  │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p cert-options-465214 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-190944                                                                                                                                                                                                                  │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ cert-options-465214 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ -p cert-options-465214 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                        │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                     │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:36:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:36:01.895609  292942 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:01.895878  292942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:01.895882  292942 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:01.895885  292942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:01.896089  292942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:01.896482  292942 out.go:368] Setting JSON to false
	I1209 02:36:01.897762  292942 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4711,"bootTime":1765243051,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:36:01.897810  292942 start.go:143] virtualization: kvm guest
	I1209 02:36:01.899831  292942 out.go:179] * [cert-expiration-572052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:36:01.900909  292942 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:36:01.900947  292942 notify.go:221] Checking for updates...
	I1209 02:36:01.903339  292942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:36:01.904729  292942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:01.905891  292942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:36:01.907604  292942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:36:01.909053  292942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:36:01.911032  292942 config.go:182] Loaded profile config "cert-expiration-572052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:36:01.911876  292942 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:36:01.938767  292942 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:36:01.938926  292942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:02.004755  292942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-09 02:36:01.993912245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:02.004912  292942 docker.go:319] overlay module found
	I1209 02:36:02.006751  292942 out.go:179] * Using the docker driver based on existing profile
	I1209 02:36:02.007852  292942 start.go:309] selected driver: docker
	I1209 02:36:02.007862  292942 start.go:927] validating driver "docker" against &{Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:02.007962  292942 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:36:02.008738  292942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:02.076288  292942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-09 02:36:02.066049329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:02.076583  292942 cni.go:84] Creating CNI manager for ""
	I1209 02:36:02.076667  292942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:02.076717  292942 start.go:353] cluster config:
	{Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:02.077985  292942 out.go:179] * Starting "cert-expiration-572052" primary control-plane node in "cert-expiration-572052" cluster
	I1209 02:36:02.078927  292942 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:36:02.079908  292942 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:36:02.080982  292942 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:36:02.081005  292942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:36:02.081013  292942 cache.go:65] Caching tarball of preloaded images
	I1209 02:36:02.081074  292942 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:36:02.081071  292942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:36:02.081090  292942 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:36:02.081167  292942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/config.json ...
	I1209 02:36:02.105456  292942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:36:02.105469  292942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:36:02.105488  292942 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:36:02.105523  292942 start.go:360] acquireMachinesLock for cert-expiration-572052: {Name:mke7bd2ad125f2d9e8ba50be09e124c4335ae276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:02.105606  292942 start.go:364] duration metric: took 65.933µs to acquireMachinesLock for "cert-expiration-572052"
	I1209 02:36:02.105621  292942 start.go:96] Skipping create...Using existing machine configuration
	I1209 02:36:02.105626  292942 fix.go:54] fixHost starting: 
	I1209 02:36:02.105933  292942 cli_runner.go:164] Run: docker container inspect cert-expiration-572052 --format={{.State.Status}}
	I1209 02:36:02.125965  292942 fix.go:112] recreateIfNeeded on cert-expiration-572052: state=Running err=<nil>
	W1209 02:36:02.126020  292942 fix.go:138] unexpected machine state, will restart: <nil>
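
For reference, the machine-state probe recorded above can be rerun by hand. A minimal sketch, using the same docker inspect call the log shows:

    docker container inspect cert-expiration-572052 --format '{{.State.Status}}'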
	
	
	==> CRI-O <==
	Dec 09 02:35:50 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:50.859079708Z" level=info msg="Starting container: 86f4e3ea6b37fa24953cf17027fd6152320ed56502add56514d5aa021611d8d7" id=ae1c36e2-97a9-4a72-9068-1c8153fa55c5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:35:50 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:50.861084327Z" level=info msg="Started container" PID=1864 containerID=86f4e3ea6b37fa24953cf17027fd6152320ed56502add56514d5aa021611d8d7 description=kube-system/coredns-66bc5c9577-gtkkc/coredns id=ae1c36e2-97a9-4a72-9068-1c8153fa55c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a674e6ba7525cf862826cb8c90be4ab3fc2e7e04c617b43110084ce1ebda86d5
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.966360769Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3776a79f-9623-418a-96c4-b38d58df85b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.966424591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.970857936Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22369f0f52bf36b15a53cd2f51b9865e4504456f03b88753200e29be39ab7a4d UID:ab74c108-2004-4878-a264-225156656ac5 NetNS:/var/run/netns/71345fa0-4c24-4810-b5d5-ea4151b569e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00060e628}] Aliases:map[]}"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.970884309Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.980895761Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:22369f0f52bf36b15a53cd2f51b9865e4504456f03b88753200e29be39ab7a4d UID:ab74c108-2004-4878-a264-225156656ac5 NetNS:/var/run/netns/71345fa0-4c24-4810-b5d5-ea4151b569e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00060e628}] Aliases:map[]}"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.981036495Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.981738067Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.98247967Z" level=info msg="Ran pod sandbox 22369f0f52bf36b15a53cd2f51b9865e4504456f03b88753200e29be39ab7a4d with infra container: default/busybox/POD" id=3776a79f-9623-418a-96c4-b38d58df85b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.983564757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=322c2093-9d24-4f87-a4c7-4314d81643d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.983720157Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=322c2093-9d24-4f87-a4c7-4314d81643d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.983770835Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=322c2093-9d24-4f87-a4c7-4314d81643d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.98448346Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31bcfa48-9491-422b-af57-517e061f0112 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:35:53 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:53.986348029Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.636893657Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=31bcfa48-9491-422b-af57-517e061f0112 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.637574621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a25d1167-8a7a-41d3-b171-81b7de4d1df8 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.638973197Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=61393a48-2af5-494d-afeb-050f9e633616 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.642148662Z" level=info msg="Creating container: default/busybox/busybox" id=7081e702-af2d-4e02-afd6-431edf58ca52 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.642260136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.64600903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.646455394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.671444864Z" level=info msg="Created container d7bc76ff6212f6ca8616b500608fb6ecc3747f63f1e28961777c576b1600dba0: default/busybox/busybox" id=7081e702-af2d-4e02-afd6-431edf58ca52 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.672004167Z" level=info msg="Starting container: d7bc76ff6212f6ca8616b500608fb6ecc3747f63f1e28961777c576b1600dba0" id=422a65d2-0353-4ece-9076-7cea7ccfa440 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:35:54 default-k8s-diff-port-512414 crio[773]: time="2025-12-09T02:35:54.67360704Z" level=info msg="Started container" PID=1943 containerID=d7bc76ff6212f6ca8616b500608fb6ecc3747f63f1e28961777c576b1600dba0 description=default/busybox/busybox id=422a65d2-0353-4ece-9076-7cea7ccfa440 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22369f0f52bf36b15a53cd2f51b9865e4504456f03b88753200e29be39ab7a4d
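
The CRI-O entries above are journal-backed on the node. A minimal sketch for pulling them directly, assuming the usual systemd unit name inside the minikube node:

    minikube -p default-k8s-diff-port-512414 ssh -- sudo journalctl -u crio --no-pager -n 50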
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	d7bc76ff6212f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   22369f0f52bf3       busybox                                                default
	86f4e3ea6b37f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   a674e6ba7525c       coredns-66bc5c9577-gtkkc                               kube-system
	706e5f4a14a3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d57943e0362b5       storage-provisioner                                    kube-system
	c0c846d3fdc5f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   f71b2cbb056af       kube-proxy-nkdhm                                       kube-system
	52b23ab659062       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   64e90e91dccac       kindnet-5hz5b                                          kube-system
	ea77a2ed8cbbb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   356daaeede993       etcd-default-k8s-diff-port-512414                      kube-system
	6bb4f22b2e7fc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   a950136e1d2a6       kube-scheduler-default-k8s-diff-port-512414            kube-system
	a36fc71566949       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   83d87e059e149       kube-apiserver-default-k8s-diff-port-512414            kube-system
	17cda75ce39d7       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   1dd56518a9fef       kube-controller-manager-default-k8s-diff-port-512414   kube-system
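
The status table above is crictl's all-containers listing, the same tool the harness drives elsewhere in these logs. A minimal sketch to regenerate it on the node:

    minikube -p default-k8s-diff-port-512414 ssh -- sudo crictl ps -a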
	
	
	==> coredns [86f4e3ea6b37fa24953cf17027fd6152320ed56502add56514d5aa021611d8d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51462 - 8154 "HINFO IN 1338702857531837611.6477040207257414065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.480923174s
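
Per-container output such as this CoreDNS block can be fetched either from the runtime or through the API server. Two equivalent sketches, using the container ID and pod name from the status table above and assuming the kubeconfig context minikube creates for the profile:

    minikube -p default-k8s-diff-port-512414 ssh -- sudo crictl logs 86f4e3ea6b37f
    kubectl --context default-k8s-diff-port-512414 -n kube-system logs coredns-66bc5c9577-gtkkc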
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-512414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-512414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=default-k8s-diff-port-512414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-512414
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:35:54 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:35:54 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:35:54 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:35:54 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-512414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                73837a98-9d7d-40ab-bb93-0a67d7e98624
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gtkkc                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-512414                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-5hz5b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-512414             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-512414    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-nkdhm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-512414             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-512414 event: Registered Node default-k8s-diff-port-512414 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-512414 status is now: NodeReady
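
The node dump above is plain kubectl output. A minimal sketch to regenerate it, again assuming the profile-named kubeconfig context:

    kubectl --context default-k8s-diff-port-512414 describe node default-k8s-diff-port-512414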
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
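
The dmesg excerpt reflects the host kernel, which the docker-driver node shares. A minimal sketch for reading it with human-readable timestamps, assuming root on the node:

    minikube -p default-k8s-diff-port-512414 ssh -- sudo dmesg --ctime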
	
	
	==> etcd [ea77a2ed8cbbbc5d53572ba5c474dbca3209105385facd72cfc1a1b11dbf7289] <==
	{"level":"warn","ts":"2025-12-09T02:35:30.874173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.882271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.892601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.906437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.913431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.921968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.930541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.938835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.947425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.956292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.973750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.980671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.987388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:30.996185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.004738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.011194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.020073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.027280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.045033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.052701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.059587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.073167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.080841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.089301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:31.141189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46362","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:36:03 up  1:18,  0 user,  load average: 3.16, 2.42, 1.77
	Linux default-k8s-diff-port-512414 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [52b23ab6590625e4c077840f51f1dc4c24b968dfa41dd1ae29c82575f746d0a6] <==
	I1209 02:35:39.985415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:35:39.985715       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1209 02:35:39.985883       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:35:39.985914       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:35:39.985938       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:35:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:35:40.283759       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:35:40.324715       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:35:40.324827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:35:40.324984       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:35:40.584026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:35:40.584050       1 metrics.go:72] Registering metrics
	I1209 02:35:40.584109       1 controller.go:711] "Syncing nftables rules"
	I1209 02:35:50.283793       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:35:50.283852       1 main.go:301] handling current node
	I1209 02:36:00.286860       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:36:00.286887       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a36fc715669490b4423de7c07a55cd44e12146427aa2bd66a7785ba9224ddf7b] <==
	E1209 02:35:31.719220       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1209 02:35:31.745481       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:35:31.750882       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:31.751332       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:35:31.756552       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:31.757313       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:35:31.921704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:35:32.548742       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 02:35:32.552631       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:35:32.552664       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:35:32.977462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:35:33.012819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:35:33.151557       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:35:33.159157       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1209 02:35:33.160152       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:35:33.165705       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:35:33.571159       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:35:33.960100       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:35:33.969761       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:35:33.977113       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:35:39.227083       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:39.231802       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:39.273863       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:35:39.422368       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1209 02:36:01.752474       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:50770: use of closed network connection
	
	
	==> kube-controller-manager [17cda75ce39d75eb03aafd4148b7bb01852f882e1d6eab9aff5f7a7e5af493e8] <==
	I1209 02:35:38.559808       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:35:38.569070       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:35:38.569077       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:35:38.569135       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 02:35:38.569134       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 02:35:38.569151       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:35:38.569189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 02:35:38.569158       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 02:35:38.569275       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:35:38.569367       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:35:38.569416       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-512414"
	I1209 02:35:38.569483       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 02:35:38.570649       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:35:38.570858       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:35:38.570881       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 02:35:38.571210       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 02:35:38.572409       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 02:35:38.573605       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:35:38.574797       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 02:35:38.578069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:35:38.578123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:35:38.578069       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:35:38.592276       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1209 02:35:38.597597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:35:53.572038       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c0c846d3fdc5f3e415ddbf72f954f60c75852fd028fb0f923d90d433f4a1717a] <==
	I1209 02:35:39.854517       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:35:39.919615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:35:40.020244       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:35:40.020285       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1209 02:35:40.020377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:35:40.047362       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:35:40.047421       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:35:40.066417       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:35:40.067318       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:35:40.067702       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:35:40.070332       1 config.go:200] "Starting service config controller"
	I1209 02:35:40.070351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:35:40.070356       1 config.go:309] "Starting node config controller"
	I1209 02:35:40.070363       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:35:40.070369       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:35:40.070373       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:35:40.070485       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:35:40.070585       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:35:40.070617       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:35:40.171205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:35:40.171240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:35:40.171274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6bb4f22b2e7fcb7019626da6ea7b7579034e73aa8d7a0cdfab35be21dc64b564] <==
	E1209 02:35:31.592818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:35:31.594626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:35:31.594722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:35:31.594731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:35:31.594838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:35:31.594856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 02:35:31.594964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 02:35:31.595002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 02:35:31.595069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:35:31.595067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:35:31.595089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 02:35:32.402192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 02:35:32.415949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 02:35:32.424852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:35:32.467297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1209 02:35:32.498036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 02:35:32.584187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 02:35:32.615845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 02:35:32.617712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 02:35:32.654666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 02:35:32.742320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:35:32.765301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:35:32.786251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:35:32.830048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1209 02:35:35.090615       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:35:34 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:34.927357    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-512414" podStartSLOduration=1.9273345819999999 podStartE2EDuration="1.927334582s" podCreationTimestamp="2025-12-09 02:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:34.926834608 +0000 UTC m=+1.188766947" watchObservedRunningTime="2025-12-09 02:35:34.927334582 +0000 UTC m=+1.189266922"
	Dec 09 02:35:34 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:34.938776    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-512414" podStartSLOduration=1.938753541 podStartE2EDuration="1.938753541s" podCreationTimestamp="2025-12-09 02:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:34.938680102 +0000 UTC m=+1.200612441" watchObservedRunningTime="2025-12-09 02:35:34.938753541 +0000 UTC m=+1.200685880"
	Dec 09 02:35:34 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:34.955183    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-512414" podStartSLOduration=1.955165789 podStartE2EDuration="1.955165789s" podCreationTimestamp="2025-12-09 02:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:34.947216012 +0000 UTC m=+1.209148352" watchObservedRunningTime="2025-12-09 02:35:34.955165789 +0000 UTC m=+1.217098128"
	Dec 09 02:35:34 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:34.955275    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-512414" podStartSLOduration=1.955271027 podStartE2EDuration="1.955271027s" podCreationTimestamp="2025-12-09 02:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:34.954979969 +0000 UTC m=+1.216912307" watchObservedRunningTime="2025-12-09 02:35:34.955271027 +0000 UTC m=+1.217203365"
	Dec 09 02:35:38 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:38.560248    1323 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 02:35:38 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:38.561053    1323 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458126    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeff075a-c1a7-49b1-b3c4-dee45cc405fe-xtables-lock\") pod \"kindnet-5hz5b\" (UID: \"aeff075a-c1a7-49b1-b3c4-dee45cc405fe\") " pod="kube-system/kindnet-5hz5b"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458172    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeff075a-c1a7-49b1-b3c4-dee45cc405fe-lib-modules\") pod \"kindnet-5hz5b\" (UID: \"aeff075a-c1a7-49b1-b3c4-dee45cc405fe\") " pod="kube-system/kindnet-5hz5b"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458200    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3cad909-51ec-4cd6-b391-d993cf9e18d5-xtables-lock\") pod \"kube-proxy-nkdhm\" (UID: \"b3cad909-51ec-4cd6-b391-d993cf9e18d5\") " pod="kube-system/kube-proxy-nkdhm"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458223    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sgw\" (UniqueName: \"kubernetes.io/projected/aeff075a-c1a7-49b1-b3c4-dee45cc405fe-kube-api-access-c6sgw\") pod \"kindnet-5hz5b\" (UID: \"aeff075a-c1a7-49b1-b3c4-dee45cc405fe\") " pod="kube-system/kindnet-5hz5b"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458245    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3cad909-51ec-4cd6-b391-d993cf9e18d5-lib-modules\") pod \"kube-proxy-nkdhm\" (UID: \"b3cad909-51ec-4cd6-b391-d993cf9e18d5\") " pod="kube-system/kube-proxy-nkdhm"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458681    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aeff075a-c1a7-49b1-b3c4-dee45cc405fe-cni-cfg\") pod \"kindnet-5hz5b\" (UID: \"aeff075a-c1a7-49b1-b3c4-dee45cc405fe\") " pod="kube-system/kindnet-5hz5b"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458732    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3cad909-51ec-4cd6-b391-d993cf9e18d5-kube-proxy\") pod \"kube-proxy-nkdhm\" (UID: \"b3cad909-51ec-4cd6-b391-d993cf9e18d5\") " pod="kube-system/kube-proxy-nkdhm"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.458768    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkfm\" (UniqueName: \"kubernetes.io/projected/b3cad909-51ec-4cd6-b391-d993cf9e18d5-kube-api-access-5gkfm\") pod \"kube-proxy-nkdhm\" (UID: \"b3cad909-51ec-4cd6-b391-d993cf9e18d5\") " pod="kube-system/kube-proxy-nkdhm"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.875626    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nkdhm" podStartSLOduration=0.875606382 podStartE2EDuration="875.606382ms" podCreationTimestamp="2025-12-09 02:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:39.875347772 +0000 UTC m=+6.137280110" watchObservedRunningTime="2025-12-09 02:35:39.875606382 +0000 UTC m=+6.137538721"
	Dec 09 02:35:39 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:39.888737    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5hz5b" podStartSLOduration=0.888719716 podStartE2EDuration="888.719716ms" podCreationTimestamp="2025-12-09 02:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:39.888216055 +0000 UTC m=+6.150148396" watchObservedRunningTime="2025-12-09 02:35:39.888719716 +0000 UTC m=+6.150652055"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.488792    1323 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.528831    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cfbd4aa-8819-4717-b719-d53cce885003-config-volume\") pod \"coredns-66bc5c9577-gtkkc\" (UID: \"9cfbd4aa-8819-4717-b719-d53cce885003\") " pod="kube-system/coredns-66bc5c9577-gtkkc"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.528864    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrd4\" (UniqueName: \"kubernetes.io/projected/be12b3a9-68f0-4ec5-8dee-5afcf03c12ff-kube-api-access-gkrd4\") pod \"storage-provisioner\" (UID: \"be12b3a9-68f0-4ec5-8dee-5afcf03c12ff\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.528887    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvht\" (UniqueName: \"kubernetes.io/projected/9cfbd4aa-8819-4717-b719-d53cce885003-kube-api-access-sjvht\") pod \"coredns-66bc5c9577-gtkkc\" (UID: \"9cfbd4aa-8819-4717-b719-d53cce885003\") " pod="kube-system/coredns-66bc5c9577-gtkkc"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.528914    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/be12b3a9-68f0-4ec5-8dee-5afcf03c12ff-tmp\") pod \"storage-provisioner\" (UID: \"be12b3a9-68f0-4ec5-8dee-5afcf03c12ff\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.906724    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gtkkc" podStartSLOduration=11.906701851 podStartE2EDuration="11.906701851s" podCreationTimestamp="2025-12-09 02:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:50.906616677 +0000 UTC m=+17.168549016" watchObservedRunningTime="2025-12-09 02:35:50.906701851 +0000 UTC m=+17.168634190"
	Dec 09 02:35:50 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:50.907011    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.906991054 podStartE2EDuration="10.906991054s" podCreationTimestamp="2025-12-09 02:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:50.89652938 +0000 UTC m=+17.158461719" watchObservedRunningTime="2025-12-09 02:35:50.906991054 +0000 UTC m=+17.168923394"
	Dec 09 02:35:53 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:53.748620    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmsv\" (UniqueName: \"kubernetes.io/projected/ab74c108-2004-4878-a264-225156656ac5-kube-api-access-wjmsv\") pod \"busybox\" (UID: \"ab74c108-2004-4878-a264-225156656ac5\") " pod="default/busybox"
	Dec 09 02:35:54 default-k8s-diff-port-512414 kubelet[1323]: I1209 02:35:54.905562    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.251134433 podStartE2EDuration="1.905539822s" podCreationTimestamp="2025-12-09 02:35:53 +0000 UTC" firstStartedPulling="2025-12-09 02:35:53.984064272 +0000 UTC m=+20.245996602" lastFinishedPulling="2025-12-09 02:35:54.63846967 +0000 UTC m=+20.900401991" observedRunningTime="2025-12-09 02:35:54.905145065 +0000 UTC m=+21.167077404" watchObservedRunningTime="2025-12-09 02:35:54.905539822 +0000 UTC m=+21.167472161"
	
	
	==> storage-provisioner [706e5f4a14a3f0c2b948e79c6c9455030f6f4ed1ce72dbcd893765ec4b9e8b62] <==
	I1209 02:35:50.872426       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:35:50.880069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:35:50.880126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:35:50.882010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:50.887160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:35:50.887278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:35:50.887457       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_6db49870-4a30-4e70-8167-d37ebe3270c2!
	I1209 02:35:50.887603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2818b0d6-e891-4733-8290-62f4a6a50242", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-512414_6db49870-4a30-4e70-8167-d37ebe3270c2 became leader
	W1209 02:35:50.889502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:50.892819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:35:50.988594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_6db49870-4a30-4e70-8167-d37ebe3270c2!
	W1209 02:35:52.895074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:52.898684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:54.901220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:54.905479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:56.907983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:56.912575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:58.915318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:58.919211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:00.922702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:00.926889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:02.930429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:02.935113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
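The storage-provisioner log above shows its leader election still taking an Endpoints-based lock, which is what triggers the repeated "v1 Endpoints is deprecated in v1.33+" warnings. client-go's leaderelection package can take a coordination.k8s.io Lease instead. Below is a minimal sketch of that Lease-based variant, assuming in-cluster credentials; the lock namespace and name mirror the log, but the wiring is illustrative, not the provisioner's actual code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this sketch runs inside a pod
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // lease holder identity
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock, // Lease lock instead of the deprecated Endpoints lock
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}
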
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (226.607901ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
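The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check that the cluster is not paused: it shells out to `sudo runc list -f json`, and on this CRI-O node the default runc state directory /run/runc does not exist, so the command fails before any addon work starts. A rough sketch of that style of check follows; the JSON field names match runc's list output, but treating a missing state directory as "no paused containers" is an assumption for illustration, not minikube's actual fix.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of runc's `list -f json` output that a
// paused-cluster check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers runs `sudo runc list -f json` (the exact command shown in
// the stderr above) and returns the IDs of containers in the "paused" state.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// ASSUMPTION: when runc has never created a container at its
		// default root it fails with "open /run/runc: no such file or
		// directory"; we read that as "nothing is paused".
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if len(out) > 0 {
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("paused containers: %v\n", ids)
}
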
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-185074 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-185074 describe deploy/metrics-server -n kube-system: exit status 1 (55.348743ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-185074 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
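For reference, the expectation checked at start_stop_delete_test.go:219 is simply the --registries override joined onto the --images override; an illustrative helper, not the test's actual code:

package main

import "fmt"

// expectedImage mirrors how the assertion builds the image it expects to find
// in the deployment: the per-addon registry override prefixes the image override.
func expectedImage(registry, image string) string {
	return registry + "/" + image
}

func main() {
	// Prints: fake.domain/registry.k8s.io/echoserver:1.4
	fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
}
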
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-185074
helpers_test.go:243: (dbg) docker inspect no-preload-185074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	        "Created": "2025-12-09T02:35:10.661104017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283867,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:35:10.706603269Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hostname",
	        "HostsPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hosts",
	        "LogPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75-json.log",
	        "Name": "/no-preload-185074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-185074:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-185074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	                "LowerDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-185074",
	                "Source": "/var/lib/docker/volumes/no-preload-185074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-185074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-185074",
	                "name.minikube.sigs.k8s.io": "no-preload-185074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2df2683bf23241941bfa74bdcd328580dfc7c592242d502cf4ccaeba98d96df0",
	            "SandboxKey": "/var/run/docker/netns/2df2683bf232",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-185074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d42fa8488d6e111dc575a4746973e4e3d2a7c9b8452ce6de734cd48ffe8b1bf7",
	                    "EndpointID": "8f5f8c4360d4fd23ec28422173dcd8a3cd03efdfbb1dc01135fe4106be321a61",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:c6:3a:6e:51:1f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-185074",
	                        "4597603e9b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
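The helper commands around this inspect output (and the `docker container inspect -f` calls visible in the minikube log below) pull published host ports straight out of the inspect data with a Go template. A small sketch of the same lookup, assuming the docker CLI is on PATH; the container name is taken from this test:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort asks the docker CLI for the host port published for a given
// container port, using the same Go-template expression the minikube logs show.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("no-preload-185074", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", port) // e.g. 33063 per the inspect output above
}
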
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-185074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-185074 logs -n 25: (1.043632085s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-933067 sudo crio config                                                                                                                                                                                                             │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │                     │
	│ delete  │ -p cilium-933067                                                                                                                                                                                                                              │ cilium-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:32 UTC │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:32 UTC │ 09 Dec 25 02:33 UTC │
	│ delete  │ -p stopped-upgrade-768415                                                                                                                                                                                                                     │ stopped-upgrade-768415       │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p force-systemd-flag-598501 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ ssh     │ force-systemd-flag-598501 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ delete  │ -p force-systemd-flag-598501                                                                                                                                                                                                                  │ force-systemd-flag-598501    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:34 UTC │
	│ start   │ -p cert-options-465214 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-190944                                                                                                                                                                                                                  │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ cert-options-465214 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ -p cert-options-465214 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                        │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                     │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p old-k8s-version-126117 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                     │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:36:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:36:01.895609  292942 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:01.895878  292942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:01.895882  292942 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:01.895885  292942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:01.896089  292942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:01.896482  292942 out.go:368] Setting JSON to false
	I1209 02:36:01.897762  292942 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4711,"bootTime":1765243051,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:36:01.897810  292942 start.go:143] virtualization: kvm guest
	I1209 02:36:01.899831  292942 out.go:179] * [cert-expiration-572052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:36:01.900909  292942 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:36:01.900947  292942 notify.go:221] Checking for updates...
	I1209 02:36:01.903339  292942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:36:01.904729  292942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:01.905891  292942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:36:01.907604  292942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:36:01.909053  292942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:36:01.911032  292942 config.go:182] Loaded profile config "cert-expiration-572052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:36:01.911876  292942 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:36:01.938767  292942 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:36:01.938926  292942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:02.004755  292942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-09 02:36:01.993912245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:02.004912  292942 docker.go:319] overlay module found
	I1209 02:36:02.006751  292942 out.go:179] * Using the docker driver based on existing profile
	I1209 02:36:02.007852  292942 start.go:309] selected driver: docker
	I1209 02:36:02.007862  292942 start.go:927] validating driver "docker" against &{Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:02.007962  292942 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:36:02.008738  292942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:02.076288  292942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-09 02:36:02.066049329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:02.076583  292942 cni.go:84] Creating CNI manager for ""
	I1209 02:36:02.076667  292942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:02.076717  292942 start.go:353] cluster config:
	{Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:02.077985  292942 out.go:179] * Starting "cert-expiration-572052" primary control-plane node in "cert-expiration-572052" cluster
	I1209 02:36:02.078927  292942 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:36:02.079908  292942 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:36:02.080982  292942 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:36:02.081005  292942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:36:02.081013  292942 cache.go:65] Caching tarball of preloaded images
	I1209 02:36:02.081074  292942 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:36:02.081071  292942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:36:02.081090  292942 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:36:02.081167  292942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/config.json ...
	I1209 02:36:02.105456  292942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:36:02.105469  292942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:36:02.105488  292942 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:36:02.105523  292942 start.go:360] acquireMachinesLock for cert-expiration-572052: {Name:mke7bd2ad125f2d9e8ba50be09e124c4335ae276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:02.105606  292942 start.go:364] duration metric: took 65.933µs to acquireMachinesLock for "cert-expiration-572052"
	I1209 02:36:02.105621  292942 start.go:96] Skipping create...Using existing machine configuration
	I1209 02:36:02.105626  292942 fix.go:54] fixHost starting: 
	I1209 02:36:02.105933  292942 cli_runner.go:164] Run: docker container inspect cert-expiration-572052 --format={{.State.Status}}
	I1209 02:36:02.125965  292942 fix.go:112] recreateIfNeeded on cert-expiration-572052: state=Running err=<nil>
	W1209 02:36:02.126020  292942 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 02:36:02.130226  292942 out.go:252] * Updating the running docker "cert-expiration-572052" container ...
	I1209 02:36:02.130253  292942 machine.go:94] provisionDockerMachine start ...
	I1209 02:36:02.130336  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:02.152537  292942 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:02.152799  292942 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1209 02:36:02.152806  292942 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:36:02.287529  292942 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-572052
	
	I1209 02:36:02.287547  292942 ubuntu.go:182] provisioning hostname "cert-expiration-572052"
	I1209 02:36:02.287615  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:02.311879  292942 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:02.312217  292942 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1209 02:36:02.312232  292942 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-572052 && echo "cert-expiration-572052" | sudo tee /etc/hostname
	I1209 02:36:02.465054  292942 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-572052
	
	I1209 02:36:02.465124  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:02.488195  292942 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:02.488379  292942 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1209 02:36:02.488389  292942 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-572052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-572052/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-572052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:36:02.625557  292942 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:36:02.625576  292942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:36:02.625597  292942 ubuntu.go:190] setting up certificates
	I1209 02:36:02.625626  292942 provision.go:84] configureAuth start
	I1209 02:36:02.625696  292942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-572052
	I1209 02:36:02.645241  292942 provision.go:143] copyHostCerts
	I1209 02:36:02.645307  292942 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:36:02.645317  292942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:36:02.645401  292942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:36:02.645514  292942 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:36:02.645521  292942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:36:02.645562  292942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:36:02.645654  292942 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:36:02.645660  292942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:36:02.645698  292942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:36:02.645777  292942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-572052 san=[127.0.0.1 192.168.94.2 cert-expiration-572052 localhost minikube]
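provision.go:117 issues a server certificate signed by the minikube CA, carrying exactly the SANs listed above (127.0.0.1, 192.168.94.2, the machine name, localhost, minikube). A hedged crypto/x509 sketch of that issuance; newServerCert and the 8760h lifetime (the profile's CertExpiration) are illustrative, not minikube's actual function:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert returns PEM-encoded cert and key for a server certificate
    // signed by the given CA, with the SANs from the log line above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-572052"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(8760 * time.Hour), // CertExpiration:8760h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            DNSNames:     []string{"cert-expiration-572052", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }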
	I1209 02:36:02.763738  292942 provision.go:177] copyRemoteCerts
	I1209 02:36:02.763788  292942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:36:02.763842  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:02.784195  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:02.883448  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:36:02.904307  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 02:36:02.923501  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 02:36:02.944790  292942 provision.go:87] duration metric: took 319.153397ms to configureAuth
	I1209 02:36:02.944810  292942 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:36:02.944990  292942 config.go:182] Loaded profile config "cert-expiration-572052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:36:02.945189  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:02.965359  292942 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:02.965685  292942 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1209 02:36:02.965991  292942 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:36:03.335510  292942 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:36:03.335525  292942 machine.go:97] duration metric: took 1.205267601s to provisionDockerMachine
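The final provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the --insecure-registry flag for the service CIDR takes effect. A small sketch of rendering that drop-in with text/template (helper names are illustrative):

    package sketch

    import (
        "bytes"
        "text/template"
    )

    // crioSysconfig mirrors the drop-in written by the SSH command above.
    var crioSysconfig = template.Must(template.New("crio").Parse(
        "CRIO_MINIKUBE_OPTIONS='--insecure-registry {{.ServiceCIDR}} '\n"))

    // renderCrioSysconfig produces the file content for a given service CIDR,
    // e.g. "10.96.0.0/12" as logged.
    func renderCrioSysconfig(serviceCIDR string) (string, error) {
        var buf bytes.Buffer
        err := crioSysconfig.Execute(&buf, struct{ ServiceCIDR string }{serviceCIDR})
        return buf.String(), err
    }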
	I1209 02:36:03.335536  292942 start.go:293] postStartSetup for "cert-expiration-572052" (driver="docker")
	I1209 02:36:03.335546  292942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:36:03.335608  292942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:36:03.335673  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:03.354665  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:03.448891  292942 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:36:03.452497  292942 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:36:03.452518  292942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:36:03.452528  292942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:36:03.452586  292942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:36:03.452685  292942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:36:03.452790  292942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:36:03.461136  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:03.478716  292942 start.go:296] duration metric: took 143.169464ms for postStartSetup
	I1209 02:36:03.478813  292942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:36:03.478857  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:03.498136  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:03.589307  292942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:36:03.594244  292942 fix.go:56] duration metric: took 1.488614188s for fixHost
	I1209 02:36:03.594259  292942 start.go:83] releasing machines lock for "cert-expiration-572052", held for 1.48864512s
	I1209 02:36:03.594321  292942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-572052
	I1209 02:36:03.613760  292942 ssh_runner.go:195] Run: cat /version.json
	I1209 02:36:03.613805  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:03.613820  292942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:36:03.613884  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:03.635438  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:03.636344  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:03.804158  292942 ssh_runner.go:195] Run: systemctl --version
	I1209 02:36:03.810789  292942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:36:03.849555  292942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:36:03.855096  292942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:36:03.855148  292942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:36:03.863599  292942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
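Before handing networking to kindnet, minikube disables any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled, as the find/mv command above shows (here there were none). A Go sketch of the equivalent rename pass, assuming local filesystem access:

    package sketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
    // the analogue of the `find ... -exec mv {} {}.mk_disabled` run above.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var moved []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, src)
            }
        }
        return moved, nil
    }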
	I1209 02:36:03.863613  292942 start.go:496] detecting cgroup driver to use...
	I1209 02:36:03.863654  292942 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:36:03.863702  292942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:36:03.878325  292942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:36:03.890859  292942 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:36:03.890891  292942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:36:03.905988  292942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:36:03.919227  292942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:36:04.057227  292942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:36:04.179242  292942 docker.go:234] disabling docker service ...
	I1209 02:36:04.179297  292942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:36:04.200722  292942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:36:04.225157  292942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:36:04.358174  292942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:36:04.489746  292942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:36:04.502339  292942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:36:04.516523  292942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:36:04.516573  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.526068  292942 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:36:04.526153  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.535248  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.543934  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.552071  292942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:36:04.559486  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.567600  292942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.575355  292942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:04.583423  292942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:36:04.590419  292942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:36:04.597155  292942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:04.705761  292942 ssh_runner.go:195] Run: sudo systemctl restart crio
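The sed commands above patch single `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before the daemon-reload and crio restart. A sketch of that line-oriented rewrite; setTOMLKey is an illustrative helper, not minikube code:

    package sketch

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey rewrites every `key = ...` line in a CRI-O drop-in to
    // `key = "value"`, mirroring the sed invocations above.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0644)
    }

For example, setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1") would reproduce the first sed above.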
	I1209 02:36:04.879106  292942 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:36:04.879155  292942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:36:04.883065  292942 start.go:564] Will wait 60s for crictl version
	I1209 02:36:04.883108  292942 ssh_runner.go:195] Run: which crictl
	I1209 02:36:04.886587  292942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:36:04.909517  292942 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:36:04.909575  292942 ssh_runner.go:195] Run: crio --version
	I1209 02:36:04.935866  292942 ssh_runner.go:195] Run: crio --version
	I1209 02:36:04.962817  292942 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:36:04.964191  292942 cli_runner.go:164] Run: docker network inspect cert-expiration-572052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:36:04.982024  292942 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:36:04.986086  292942 kubeadm.go:884] updating cluster {Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:36:04.986168  292942 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:36:04.986203  292942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:05.017545  292942 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:05.017555  292942 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:36:05.017593  292942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:05.041146  292942 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:05.041156  292942 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:36:05.041162  292942 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1209 02:36:05.041260  292942 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-572052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:36:05.041314  292942 ssh_runner.go:195] Run: crio config
	I1209 02:36:05.085769  292942 cni.go:84] Creating CNI manager for ""
	I1209 02:36:05.085778  292942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:05.085789  292942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:36:05.085808  292942 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-572052 NodeName:cert-expiration-572052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:36:05.085926  292942 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-572052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:36:05.085975  292942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:36:05.093817  292942 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:36:05.093873  292942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:36:05.101078  292942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1209 02:36:05.112941  292942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:36:05.124346  292942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1209 02:36:05.135953  292942 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:36:05.139286  292942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:05.255914  292942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:05.269276  292942 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052 for IP: 192.168.94.2
	I1209 02:36:05.269288  292942 certs.go:195] generating shared ca certs ...
	I1209 02:36:05.269304  292942 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.269446  292942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:36:05.269485  292942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:36:05.269494  292942 certs.go:257] generating profile certs ...
	W1209 02:36:05.269693  292942 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1209 02:36:05.269714  292942 certs.go:629] cert expired /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.crt: expiration: 2025-12-09 02:35:48 +0000 UTC, now: 2025-12-09 02:36:05.269708911 +0000 UTC m=+3.434219472
	I1209 02:36:05.269836  292942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.key
	I1209 02:36:05.269866  292942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.crt with IP's: []
	I1209 02:36:05.329406  292942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.crt ...
	I1209 02:36:05.329422  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.crt: {Name:mke83e8cf8a88a0f28acd71390d61e1649f3928a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.329543  292942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.key ...
	I1209 02:36:05.329552  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/client.key: {Name:mkfa14790482fb20b46ced1c69f28729a704946a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1209 02:36:05.329734  292942 out.go:285] ! Certificate apiserver.crt.010860e2 has expired. Generating a new one...
	I1209 02:36:05.329751  292942 certs.go:629] cert expired /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt.010860e2: expiration: 2025-12-09 02:35:48 +0000 UTC, now: 2025-12-09 02:36:05.329746036 +0000 UTC m=+3.494256588
	I1209 02:36:05.329818  292942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key.010860e2
	I1209 02:36:05.329830  292942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt.010860e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1209 02:36:05.507649  292942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt.010860e2 ...
	I1209 02:36:05.507668  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt.010860e2: {Name:mk3e8a27e1bc8d60eddbdf50b99bc0aeb4fc8d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.507813  292942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key.010860e2 ...
	I1209 02:36:05.507824  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key.010860e2: {Name:mk1d0234811df43673ad6c5c72142bd0a0d0d19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.507915  292942 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt.010860e2 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt
	I1209 02:36:05.508097  292942 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key.010860e2 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key
	W1209 02:36:05.508300  292942 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1209 02:36:05.508320  292942 certs.go:629] cert expired /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.crt: expiration: 2025-12-09 02:35:48 +0000 UTC, now: 2025-12-09 02:36:05.508314572 +0000 UTC m=+3.672825136
	I1209 02:36:05.508396  292942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.key
	I1209 02:36:05.508412  292942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.crt with IP's: []
	I1209 02:36:05.544348  292942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.crt ...
	I1209 02:36:05.544365  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.crt: {Name:mk0dfb5e389de63b9a16b1e6e1fda5e97e1a3a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.544504  292942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.key ...
	I1209 02:36:05.544514  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.key: {Name:mkadce7fd73f382d9998d9759c8e789707558e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:05.544705  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:36:05.544738  292942 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:36:05.544745  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:36:05.544766  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:36:05.544786  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:36:05.544806  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:36:05.544848  292942 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:05.545411  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:36:05.564021  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:36:05.580789  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:36:05.597297  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:36:05.613434  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 02:36:05.629922  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:36:05.647022  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:36:05.663169  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/cert-expiration-572052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:36:05.679045  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:36:05.695616  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:36:05.712900  292942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:36:05.729102  292942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:36:05.741029  292942 ssh_runner.go:195] Run: openssl version
	I1209 02:36:05.746593  292942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:36:05.753285  292942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:36:05.760854  292942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:36:05.764105  292942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:36:05.764133  292942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:36:05.798061  292942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:36:05.804836  292942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:05.811487  292942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:36:05.818817  292942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:05.822204  292942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:05.822236  292942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:05.855663  292942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:36:05.862408  292942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:36:05.869242  292942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:36:05.876006  292942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:36:05.879307  292942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:36:05.879342  292942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:36:05.914794  292942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:36:05.921499  292942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:36:05.925224  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 02:36:05.958763  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 02:36:05.994023  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 02:36:06.027508  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 02:36:06.061271  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 02:36:06.095227  292942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
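Each openssl run above is `-checkend 86400`: exit non-zero if the certificate expires within the next 24 hours. A crypto/x509 sketch of the same check, assuming the certificate is a local PEM file (expiresWithin is an illustrative name, not minikube's API):

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the Go analogue of `openssl x509 -checkend 86400` run above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }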
	I1209 02:36:06.128369  292942 kubeadm.go:401] StartCluster: {Name:cert-expiration-572052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-572052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:06.128451  292942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:36:06.128491  292942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:36:06.154231  292942 cri.go:89] found id: "da3c887660d02df8c23ab88176246b4ae0a1f59b488dead8793f8724979be8b6"
	I1209 02:36:06.154244  292942 cri.go:89] found id: "e3996d373000bfda4780e3a5b86ed7eee6a741f4982faa9aee30265b070cc819"
	I1209 02:36:06.154249  292942 cri.go:89] found id: "a69552b9f64bc656b04a5cb9e37a4b31075e98ee50f17fcb1630f4591fcf5561"
	I1209 02:36:06.154252  292942 cri.go:89] found id: "8d162d79315515827f3ace4b8d682270163f64a97a1fcfcc2c79db10c894cf09"
	I1209 02:36:06.154255  292942 cri.go:89] found id: "6202a811bc7b3e07307ff981dbdbee6eaebbb9994a0ccf952c7513497b3d90b2"
	I1209 02:36:06.154258  292942 cri.go:89] found id: "f65e83ace79efa69b77597c4fa7b851e276a715e851715b70067f6dc2ba2dd9d"
	I1209 02:36:06.154261  292942 cri.go:89] found id: "3b6c00a613822f8f0ee99c92f321a892c91a33fed707183a8dded2990eee91a9"
	I1209 02:36:06.154263  292942 cri.go:89] found id: "2bccbdc0abb9cb9d300b5edff76bb690d2354bcc3435c2f07f8c2a6380ca4c71"
	I1209 02:36:06.154266  292942 cri.go:89] found id: ""
	I1209 02:36:06.154304  292942 ssh_runner.go:195] Run: sudo runc list -f json
	W1209 02:36:06.165156  292942 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:06Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:36:06.165194  292942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:36:06.172698  292942 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 02:36:06.172705  292942 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 02:36:06.172736  292942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 02:36:06.179461  292942 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:36:06.180208  292942 kubeconfig.go:125] found "cert-expiration-572052" server: "https://192.168.94.2:8443"
	I1209 02:36:06.181903  292942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 02:36:06.189377  292942 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
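Reconfiguration is skipped because the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml the cluster was started with (the `diff -u` above exits 0). A sketch of that decision, assuming a plain byte comparison stands in for diff (needsReconfig is an illustrative name):

    package sketch

    import (
        "bytes"
        "os"
    )

    // needsReconfig reports whether the freshly rendered kubeadm config
    // differs from the one the running cluster was started with.
    func needsReconfig(current, proposed string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return true, err // no existing config: treat as needing (re)configuration
        }
        b, err := os.ReadFile(proposed)
        if err != nil {
            return true, err
        }
        return !bytes.Equal(a, b), nil
    }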
	I1209 02:36:06.189397  292942 kubeadm.go:602] duration metric: took 16.68716ms to restartPrimaryControlPlane
	I1209 02:36:06.189405  292942 kubeadm.go:403] duration metric: took 61.043455ms to StartCluster
	I1209 02:36:06.189418  292942 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:06.189479  292942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:06.190828  292942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:06.191082  292942 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:36:06.191141  292942 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:36:06.191242  292942 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-572052"
	I1209 02:36:06.191266  292942 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-572052"
	W1209 02:36:06.191273  292942 addons.go:248] addon storage-provisioner should already be in state true
	I1209 02:36:06.191299  292942 host.go:66] Checking if "cert-expiration-572052" exists ...
	I1209 02:36:06.191394  292942 config.go:182] Loaded profile config "cert-expiration-572052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:36:06.191435  292942 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-572052"
	I1209 02:36:06.191458  292942 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-572052"
	I1209 02:36:06.191787  292942 cli_runner.go:164] Run: docker container inspect cert-expiration-572052 --format={{.State.Status}}
	I1209 02:36:06.191790  292942 cli_runner.go:164] Run: docker container inspect cert-expiration-572052 --format={{.State.Status}}
	I1209 02:36:06.193554  292942 out.go:179] * Verifying Kubernetes components...
	I1209 02:36:06.194916  292942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:06.215465  292942 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-572052"
	W1209 02:36:06.215474  292942 addons.go:248] addon default-storageclass should already be in state true
	I1209 02:36:06.215493  292942 host.go:66] Checking if "cert-expiration-572052" exists ...
	I1209 02:36:06.215857  292942 cli_runner.go:164] Run: docker container inspect cert-expiration-572052 --format={{.State.Status}}
	I1209 02:36:06.216562  292942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:36:06.217803  292942 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:06.217812  292942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:36:06.217861  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:06.243775  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:06.244152  292942 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:06.244165  292942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:36:06.244217  292942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-572052
	I1209 02:36:06.264100  292942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/cert-expiration-572052/id_rsa Username:docker}
	I1209 02:36:06.319101  292942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:06.332525  292942 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:36:06.332574  292942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:36:06.343212  292942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:06.343814  292942 api_server.go:72] duration metric: took 152.702797ms to wait for apiserver process to appear ...
	I1209 02:36:06.343827  292942 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:36:06.343845  292942 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:06.348068  292942 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
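The healthz probe above is a plain HTTPS GET that must return 200 with body "ok". A sketch of the polling loop, assuming InsecureSkipVerify only to keep it short (minikube verifies against the cluster CA):

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }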
	I1209 02:36:06.354272  292942 api_server.go:141] control plane version: v1.34.2
	I1209 02:36:06.354285  292942 api_server.go:131] duration metric: took 10.453444ms to wait for apiserver health ...
	I1209 02:36:06.354292  292942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:36:06.357950  292942 system_pods.go:59] 8 kube-system pods found
	I1209 02:36:06.357963  292942 system_pods.go:61] "coredns-66bc5c9577-vm2m9" [5d78aa98-45a1-413f-a108-c4217c64c8f8] Running
	I1209 02:36:06.357967  292942 system_pods.go:61] "etcd-cert-expiration-572052" [ab4534d5-d770-4b18-a63f-6e18c7b0f8c3] Running
	I1209 02:36:06.357969  292942 system_pods.go:61] "kindnet-ttpgk" [fa4cbdd6-728d-4494-8a65-14e4c63cd7d0] Running
	I1209 02:36:06.357972  292942 system_pods.go:61] "kube-apiserver-cert-expiration-572052" [e24b5f7e-fdec-42ab-8ae6-fcdecb2e7a24] Running
	I1209 02:36:06.357974  292942 system_pods.go:61] "kube-controller-manager-cert-expiration-572052" [e9421625-2598-40d1-bd48-27a2041e9598] Running
	I1209 02:36:06.357976  292942 system_pods.go:61] "kube-proxy-b6lfs" [0852c833-b298-41f1-b626-66f24671a0fc] Running
	I1209 02:36:06.357978  292942 system_pods.go:61] "kube-scheduler-cert-expiration-572052" [cabc6f61-9246-4fbb-95b4-2acdf657ba7d] Running
	I1209 02:36:06.357980  292942 system_pods.go:61] "storage-provisioner" [7647afb5-bff4-48e5-b8f0-1dd22ff9f963] Running
	I1209 02:36:06.357984  292942 system_pods.go:74] duration metric: took 3.688424ms to wait for pod list to return data ...
	I1209 02:36:06.357992  292942 kubeadm.go:587] duration metric: took 166.883791ms to wait for: map[apiserver:true system_pods:true]
	I1209 02:36:06.358000  292942 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:36:06.360213  292942 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:36:06.360228  292942 node_conditions.go:123] node cpu capacity is 8
	I1209 02:36:06.360241  292942 node_conditions.go:105] duration metric: took 2.237431ms to run NodePressure ...
	I1209 02:36:06.360253  292942 start.go:242] waiting for startup goroutines ...
	I1209 02:36:06.363088  292942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:06.824919  292942 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 02:36:06.825960  292942 addons.go:530] duration metric: took 634.826534ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:36:06.825984  292942 start.go:247] waiting for cluster config update ...
	I1209 02:36:06.825993  292942 start.go:256] writing updated cluster config ...
	I1209 02:36:06.826190  292942 ssh_runner.go:195] Run: rm -f paused
	I1209 02:36:06.872403  292942 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:36:06.873953  292942 out.go:179] * Done! kubectl is now configured to use "cert-expiration-572052" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 02:35:57 no-preload-185074 crio[766]: time="2025-12-09T02:35:57.157242475Z" level=info msg="Starting container: ec73fccd39deb47f14878e344a5e41258e5b3306798ecce6c301e283847e1d1a" id=1368a656-5ed0-427c-a21b-562b7f567f4c name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:35:57 no-preload-185074 crio[766]: time="2025-12-09T02:35:57.159225224Z" level=info msg="Started container" PID=2797 containerID=ec73fccd39deb47f14878e344a5e41258e5b3306798ecce6c301e283847e1d1a description=kube-system/coredns-7d764666f9-m6tbs/coredns id=1368a656-5ed0-427c-a21b-562b7f567f4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf21e8077e6b2eacc0ee753f38d6de374fac5560527c612ce0244ab687c53383
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.947405462Z" level=info msg="Running pod sandbox: default/busybox/POD" id=feaecf8e-32fd-4203-94b9-d99fed94c4b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.947476748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.952025432Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b933b1d58e78848eb8ea11ffd77bc90cd7fd409915899c08117ec0bf2f2d033f UID:e17362a9-2cc3-4357-81a8-d1ec477fcb7f NetNS:/var/run/netns/bc5abfe3-dd36-487a-9cab-d55c7cde2705 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009843f0}] Aliases:map[]}"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.952051484Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.961802166Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b933b1d58e78848eb8ea11ffd77bc90cd7fd409915899c08117ec0bf2f2d033f UID:e17362a9-2cc3-4357-81a8-d1ec477fcb7f NetNS:/var/run/netns/bc5abfe3-dd36-487a-9cab-d55c7cde2705 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009843f0}] Aliases:map[]}"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.961924648Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.962616621Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.963489907Z" level=info msg="Ran pod sandbox b933b1d58e78848eb8ea11ffd77bc90cd7fd409915899c08117ec0bf2f2d033f with infra container: default/busybox/POD" id=feaecf8e-32fd-4203-94b9-d99fed94c4b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.964618029Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2ed2875-6cd7-4972-99b7-a9573581bd08 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.964749252Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f2ed2875-6cd7-4972-99b7-a9573581bd08 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.96479674Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f2ed2875-6cd7-4972-99b7-a9573581bd08 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.96550277Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4645c563-8255-4ac5-854c-9d08cf40baf8 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:35:59 no-preload-185074 crio[766]: time="2025-12-09T02:35:59.966901729Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.678470097Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4645c563-8255-4ac5-854c-9d08cf40baf8 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.67914272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e2a85273-b52c-40cc-a05e-d59cbf2810e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.680727871Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e0b47e2-431e-47aa-8a21-e4a95a6497e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.683756107Z" level=info msg="Creating container: default/busybox/busybox" id=5193ba2c-ece5-4a22-96e7-b625b6a79316 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.683869175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.687701086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.688199636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.719749023Z" level=info msg="Created container d2be35289847d3c025a1711818eb85a94a18d8007cd3e4f14c0a6b044b24b773: default/busybox/busybox" id=5193ba2c-ece5-4a22-96e7-b625b6a79316 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.720295027Z" level=info msg="Starting container: d2be35289847d3c025a1711818eb85a94a18d8007cd3e4f14c0a6b044b24b773" id=0a3f729f-e0a1-4702-8c93-83c7b95d914f name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:00 no-preload-185074 crio[766]: time="2025-12-09T02:36:00.722117803Z" level=info msg="Started container" PID=2875 containerID=d2be35289847d3c025a1711818eb85a94a18d8007cd3e4f14c0a6b044b24b773 description=default/busybox/busybox id=0a3f729f-e0a1-4702-8c93-83c7b95d914f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b933b1d58e78848eb8ea11ffd77bc90cd7fd409915899c08117ec0bf2f2d033f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d2be35289847d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   b933b1d58e788       busybox                                     default
	ec73fccd39deb       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   cf21e8077e6b2       coredns-7d764666f9-m6tbs                    kube-system
	5f11b9c43fdb5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   be4c50b8f1b9a       storage-provisioner                         kube-system
	45bbf33fac66a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   3b0843eb1a203       kindnet-pflxj                               kube-system
	00381101a281b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   8fe3c5d943a2c       kube-proxy-8jh88                            kube-system
	608d668ce5934       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   5bf8e48ae7ba4       etcd-no-preload-185074                      kube-system
	491ceec7a2993       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   d6838b8eff083       kube-scheduler-no-preload-185074            kube-system
	59e6368e20107       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   2a3a05558fc2e       kube-apiserver-no-preload-185074            kube-system
	5309f7a8841b5       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   2890aa146a2f5       kube-controller-manager-no-preload-185074   kube-system
	
	
	==> coredns [ec73fccd39deb47f14878e344a5e41258e5b3306798ecce6c301e283847e1d1a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45999 - 22704 "HINFO IN 6480111824666272161.3559296659676403840. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.858732344s
	
	
	==> describe nodes <==
	Name:               no-preload-185074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-185074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=no-preload-185074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-185074
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:36:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:08 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:08 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:08 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:36:08 +0000   Tue, 09 Dec 2025 02:35:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-185074
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                bea297a5-f68c-4ca1-862a-f85a9f2be474
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-m6tbs                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-185074                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-pflxj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-185074             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-185074    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-8jh88                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-185074             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-185074 event: Registered Node no-preload-185074 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [608d668ce5934ead43341c44a67ef2c7ad25e049a000a05bc285cdb0f8c279ee] <==
	{"level":"warn","ts":"2025-12-09T02:35:34.683169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.690564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.699715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.707656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.718082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.723316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.729986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.736309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.742587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.749138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.757783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.763894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.771812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.786449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.793661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.800919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.808396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.816603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.825131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.832600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.845958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.854538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.863793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.876601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:35:34.957056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57516","server-name":"","error":"EOF"}
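Note: these repeated EOF warnings during startup are benign: something is opening raw TCP connections to etcd's client port and closing them before completing a TLS handshake, which etcd logs as a rejected connection. A minimal sketch of that kind of port-open probe (an assumption about the prober's shape, not the actual kubeadm/minikube code):

```go
// Sketch of a raw TCP liveness probe. Dialing etcd's TLS client port and
// closing without a handshake is what etcd reports as
// "rejected connection on client endpoint ... error: EOF".
package main

import (
	"fmt"
	"net"
	"time"
)

func tcpProbe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	// Closing immediately, before any TLS handshake, triggers the EOF
	// warning on the etcd side while proving the port is accepting.
	return conn.Close()
}

func main() {
	if err := tcpProbe("127.0.0.1:2379"); err != nil {
		fmt.Println("etcd not reachable:", err)
		return
	}
	fmt.Println("etcd client port open")
}
```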
	
	
	==> kernel <==
	 02:36:09 up  1:18,  0 user,  load average: 2.99, 2.40, 1.77
	Linux no-preload-185074 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45bbf33fac66ae586df2bb07b5f5dd23e5512589f1ec3ca7bdf3d876237815fb] <==
	I1209 02:35:46.249869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:35:46.250175       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1209 02:35:46.250294       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:35:46.250309       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:35:46.250328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:35:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:35:46.545281       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:35:46.545630       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:35:46.545657       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:35:46.545837       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:35:46.945978       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:35:46.946005       1 metrics.go:72] Registering metrics
	I1209 02:35:46.946075       1 controller.go:711] "Syncing nftables rules"
	I1209 02:35:56.455498       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:35:56.455555       1 main.go:301] handling current node
	I1209 02:36:06.456105       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:36:06.456140       1 main.go:301] handling current node
	
	
	==> kube-apiserver [59e6368e20107e61a35e99d1c886df5f673a6d51440f4d5171e5189f4c5bc3a7] <==
	I1209 02:35:35.501710       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:35.501728       1 policy_source.go:248] refreshing policies
	I1209 02:35:35.543467       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:35:35.555271       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:35.555401       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1209 02:35:35.560247       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:35.666412       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:35:36.345194       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1209 02:35:36.349014       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:35:36.349034       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:35:36.784071       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:35:36.824804       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:35:36.952241       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:35:36.957878       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1209 02:35:36.959020       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:35:36.963067       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:35:37.384908       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:35:37.802855       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:35:37.811454       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:35:37.819314       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:35:42.835674       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:42.839901       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:35:43.233563       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:35:43.428054       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1209 02:36:07.725822       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:46982: use of closed network connection
	
	
	==> kube-controller-manager [5309f7a8841b59f61bb38d4baa3a3788faaac96e7bfb4feac239e0748b7d91f2] <==
	I1209 02:35:42.189260       1 range_allocator.go:177] "Sending events to api server"
	I1209 02:35:42.189304       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:35:42.189311       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:35:42.189321       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189321       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189863       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189869       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189994       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1209 02:35:42.189878       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189869       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189882       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189889       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189890       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.189904       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.190066       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-185074"
	I1209 02:35:42.190711       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1209 02:35:42.189880       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.196907       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:35:42.197088       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.199767       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-185074" podCIDRs=["10.244.0.0/24"]
	I1209 02:35:42.289778       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:42.289799       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:35:42.289805       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:35:42.297457       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:57.192946       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [00381101a281bc990b7c2be99fce1fe29dd64c163c0b3ae0966fdc06de0b63c9] <==
	I1209 02:35:43.852719       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:35:43.929348       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:35:44.029469       1 shared_informer.go:377] "Caches are synced"
	I1209 02:35:44.029501       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1209 02:35:44.029587       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:35:44.047242       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:35:44.047296       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:35:44.052346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:35:44.052722       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:35:44.052743       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:35:44.053989       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:35:44.054011       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:35:44.054130       1 config.go:200] "Starting service config controller"
	I1209 02:35:44.054143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:35:44.054154       1 config.go:309] "Starting node config controller"
	I1209 02:35:44.054160       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:35:44.054180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:35:44.054189       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:35:44.054181       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:35:44.154170       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:35:44.154248       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:35:44.154253       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [491ceec7a29930b4d066f76edcd5a2857ad7e188735a3f59046748ff99d5c271] <==
	E1209 02:35:35.425930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:35:35.426077       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1209 02:35:35.426326       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:35:35.426343       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:35:35.426330       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:35:35.426372       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:35:36.232278       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:35:36.233284       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:35:36.272620       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1209 02:35:36.273798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:35:36.341422       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1209 02:35:36.342432       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1209 02:35:36.352523       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:35:36.353569       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1209 02:35:36.408696       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:35:36.409722       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:35:36.487290       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:35:36.488299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1209 02:35:36.514295       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:35:36.515275       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1209 02:35:36.529326       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:35:36.530460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:35:36.659311       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1209 02:35:36.660278       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1209 02:35:39.517006       1 shared_informer.go:377] "Caches are synced"
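Note: the forbidden errors above are the usual startup race: the scheduler's informers begin listing before kubeadm finishes bootstrapping RBAC, and they stop once "Caches are synced" appears. A hedged Go sketch of checking one of those exact permissions directly with a SubjectAccessReview (illustrative only; this is not what the scheduler itself does):

```go
// Sketch: ask the apiserver whether system:kube-scheduler may list
// csidrivers, the permission the reflector errors above complain about.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "storage.k8s.io",
				Resource: "csidrivers",
				Verb:     "list",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// During the window logged above this would report allowed=false.
	fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
}
```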
	
	
	==> kubelet <==
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.500701    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzh8\" (UniqueName: \"kubernetes.io/projected/f8108d3b-c4c6-41e0-81a1-d6acff22e510-kube-api-access-7gzh8\") pod \"kube-proxy-8jh88\" (UID: \"f8108d3b-c4c6-41e0-81a1-d6acff22e510\") " pod="kube-system/kube-proxy-8jh88"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.500748    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8108d3b-c4c6-41e0-81a1-d6acff22e510-xtables-lock\") pod \"kube-proxy-8jh88\" (UID: \"f8108d3b-c4c6-41e0-81a1-d6acff22e510\") " pod="kube-system/kube-proxy-8jh88"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.500840    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/712b93ed-2f9a-4e6b-a402-8e7349db1b72-lib-modules\") pod \"kindnet-pflxj\" (UID: \"712b93ed-2f9a-4e6b-a402-8e7349db1b72\") " pod="kube-system/kindnet-pflxj"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.500973    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbsf\" (UniqueName: \"kubernetes.io/projected/712b93ed-2f9a-4e6b-a402-8e7349db1b72-kube-api-access-zcbsf\") pod \"kindnet-pflxj\" (UID: \"712b93ed-2f9a-4e6b-a402-8e7349db1b72\") " pod="kube-system/kindnet-pflxj"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.501042    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8108d3b-c4c6-41e0-81a1-d6acff22e510-kube-proxy\") pod \"kube-proxy-8jh88\" (UID: \"f8108d3b-c4c6-41e0-81a1-d6acff22e510\") " pod="kube-system/kube-proxy-8jh88"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.501120    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/712b93ed-2f9a-4e6b-a402-8e7349db1b72-cni-cfg\") pod \"kindnet-pflxj\" (UID: \"712b93ed-2f9a-4e6b-a402-8e7349db1b72\") " pod="kube-system/kindnet-pflxj"
	Dec 09 02:35:43 no-preload-185074 kubelet[2181]: I1209 02:35:43.501148    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/712b93ed-2f9a-4e6b-a402-8e7349db1b72-xtables-lock\") pod \"kindnet-pflxj\" (UID: \"712b93ed-2f9a-4e6b-a402-8e7349db1b72\") " pod="kube-system/kindnet-pflxj"
	Dec 09 02:35:44 no-preload-185074 kubelet[2181]: I1209 02:35:44.721569    2181 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8jh88" podStartSLOduration=1.721553443 podStartE2EDuration="1.721553443s" podCreationTimestamp="2025-12-09 02:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:44.721443294 +0000 UTC m=+7.146556759" watchObservedRunningTime="2025-12-09 02:35:44.721553443 +0000 UTC m=+7.146666907"
	Dec 09 02:35:46 no-preload-185074 kubelet[2181]: I1209 02:35:46.726268    2181 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-pflxj" podStartSLOduration=1.446528012 podStartE2EDuration="3.72623723s" podCreationTimestamp="2025-12-09 02:35:43 +0000 UTC" firstStartedPulling="2025-12-09 02:35:43.768987146 +0000 UTC m=+6.194100600" lastFinishedPulling="2025-12-09 02:35:46.048696361 +0000 UTC m=+8.473809818" observedRunningTime="2025-12-09 02:35:46.726074677 +0000 UTC m=+9.151188140" watchObservedRunningTime="2025-12-09 02:35:46.72623723 +0000 UTC m=+9.151350693"
	Dec 09 02:35:47 no-preload-185074 kubelet[2181]: E1209 02:35:47.251990    2181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-185074" containerName="kube-apiserver"
	Dec 09 02:35:49 no-preload-185074 kubelet[2181]: E1209 02:35:49.484814    2181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-185074" containerName="etcd"
	Dec 09 02:35:50 no-preload-185074 kubelet[2181]: E1209 02:35:50.108523    2181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-185074" containerName="kube-scheduler"
	Dec 09 02:35:51 no-preload-185074 kubelet[2181]: E1209 02:35:51.697467    2181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-185074" containerName="kube-controller-manager"
	Dec 09 02:35:56 no-preload-185074 kubelet[2181]: I1209 02:35:56.782665    2181 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 09 02:35:56 no-preload-185074 kubelet[2181]: I1209 02:35:56.897135    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dc5\" (UniqueName: \"kubernetes.io/projected/11973463-7b09-4a70-ba86-1a54c90ed6e5-kube-api-access-r4dc5\") pod \"coredns-7d764666f9-m6tbs\" (UID: \"11973463-7b09-4a70-ba86-1a54c90ed6e5\") " pod="kube-system/coredns-7d764666f9-m6tbs"
	Dec 09 02:35:56 no-preload-185074 kubelet[2181]: I1209 02:35:56.897184    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11973463-7b09-4a70-ba86-1a54c90ed6e5-config-volume\") pod \"coredns-7d764666f9-m6tbs\" (UID: \"11973463-7b09-4a70-ba86-1a54c90ed6e5\") " pod="kube-system/coredns-7d764666f9-m6tbs"
	Dec 09 02:35:56 no-preload-185074 kubelet[2181]: I1209 02:35:56.897211    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/04833b92-89ee-467b-8b6d-27fdfa7ddb79-tmp\") pod \"storage-provisioner\" (UID: \"04833b92-89ee-467b-8b6d-27fdfa7ddb79\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:56 no-preload-185074 kubelet[2181]: I1209 02:35:56.897362    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv98s\" (UniqueName: \"kubernetes.io/projected/04833b92-89ee-467b-8b6d-27fdfa7ddb79-kube-api-access-cv98s\") pod \"storage-provisioner\" (UID: \"04833b92-89ee-467b-8b6d-27fdfa7ddb79\") " pod="kube-system/storage-provisioner"
	Dec 09 02:35:57 no-preload-185074 kubelet[2181]: E1209 02:35:57.256339    2181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-185074" containerName="kube-apiserver"
	Dec 09 02:35:57 no-preload-185074 kubelet[2181]: E1209 02:35:57.738333    2181 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-m6tbs" containerName="coredns"
	Dec 09 02:35:57 no-preload-185074 kubelet[2181]: I1209 02:35:57.746506    2181 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.746488869 podStartE2EDuration="14.746488869s" podCreationTimestamp="2025-12-09 02:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:57.74630264 +0000 UTC m=+20.171416103" watchObservedRunningTime="2025-12-09 02:35:57.746488869 +0000 UTC m=+20.171602331"
	Dec 09 02:35:57 no-preload-185074 kubelet[2181]: I1209 02:35:57.755717    2181 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-m6tbs" podStartSLOduration=14.755700856 podStartE2EDuration="14.755700856s" podCreationTimestamp="2025-12-09 02:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:35:57.755699518 +0000 UTC m=+20.180812986" watchObservedRunningTime="2025-12-09 02:35:57.755700856 +0000 UTC m=+20.180814320"
	Dec 09 02:35:58 no-preload-185074 kubelet[2181]: E1209 02:35:58.740331    2181 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-m6tbs" containerName="coredns"
	Dec 09 02:35:59 no-preload-185074 kubelet[2181]: I1209 02:35:59.714708    2181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97w7\" (UniqueName: \"kubernetes.io/projected/e17362a9-2cc3-4357-81a8-d1ec477fcb7f-kube-api-access-s97w7\") pod \"busybox\" (UID: \"e17362a9-2cc3-4357-81a8-d1ec477fcb7f\") " pod="default/busybox"
	Dec 09 02:35:59 no-preload-185074 kubelet[2181]: E1209 02:35:59.742162    2181 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-m6tbs" containerName="coredns"
	
	
	==> storage-provisioner [5f11b9c43fdb5e18a2516c56bfbb63c6277da0f7e64194aa6e9c5f04d178d56a] <==
	I1209 02:35:57.163253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:35:57.171194       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:35:57.171244       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:35:57.173289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:57.177788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:35:57.177949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:35:57.178002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49513343-9b98-4fd9-a16e-c626e02acaeb", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-185074_881496cb-8235-4710-8d27-70491f3a831b became leader
	I1209 02:35:57.178143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-185074_881496cb-8235-4710-8d27-70491f3a831b!
	W1209 02:35:57.180136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:57.184021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:35:57.278394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-185074_881496cb-8235-4710-8d27-70491f3a831b!
	W1209 02:35:59.187512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:35:59.191465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:01.194485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:01.200012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:03.202740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:03.208606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:05.211890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:05.216147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:07.219161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:07.222841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:09.226749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:36:09.231528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
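Note: every warning in this block comes from the storage provisioner still taking its leader-election lock on a v1 Endpoints object, deprecated since v1.33 in favor of coordination.k8s.io Leases. A minimal client-go sketch of the Lease-based equivalent; the lock name and namespace are copied from the log, the rest is an illustrative assumption rather than the provisioner's actual code:

```go
// Sketch only: Lease-based leader election with client-go, replacing the
// deprecated v1 Endpoints lock that triggers the warnings above.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		// Name and namespace taken from the Endpoints lock in the log.
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the provisioner controller here.
			},
			OnStoppedLeading: func() {
				// Stop work; another replica now holds the lease.
			},
		},
	})
}
```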
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-185074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.09s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.457801ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
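Note: the exit status 11 here is the same MK_ADDON_ENABLE_PAUSED path seen throughout this report: before enabling an addon, minikube checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node. A minimal sketch of such a paused-containers check (assumed shape, not minikube's actual implementation):

```go
// Sketch (not minikube code) of a "list paused containers" check that
// fails the same way when /run/runc is absent on the node.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "paused", "running", ...
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// runc keeps its state under /run/runc; if the directory does not
		// exist it exits 1 with "open /run/runc: no such file or directory",
		// which surfaces as the MK_ADDON_ENABLE_PAUSED error above.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}
```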
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-828614
helpers_test.go:243: (dbg) docker inspect newest-cni-828614:

-- stdout --
	[
	    {
	        "Id": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	        "Created": "2025-12-09T02:36:13.995817577Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:14.026056774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hosts",
	        "LogPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b-json.log",
	        "Name": "/newest-cni-828614",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-828614:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-828614",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	                "LowerDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-828614",
	                "Source": "/var/lib/docker/volumes/newest-cni-828614/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-828614",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-828614",
	                "name.minikube.sigs.k8s.io": "newest-cni-828614",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9df8eb84143dd4aa40b3472d466758fae4bc7fa0338022b9581ce613ad421cc4",
	            "SandboxKey": "/var/run/docker/netns/9df8eb84143d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-828614": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfdf3df1d734c42201a8f8b2262b719bd3d94c4522be0d2bca9d7ea31c9d112b",
	                    "EndpointID": "31439d2895a2c5b7b9dfb3da4844c7f94f9add394c313487dab42b54da0a167d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "32:32:f8:79:32:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-828614",
	                        "bdcb940dfa8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
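The Ports block in the inspect dump above is exactly what the harness reads when it needs a host-side port: each container port maps to a HostIp/HostPort pair on 127.0.0.1. A sketch of the same lookup the logs perform through cli_runner (here for this profile's 8443/tcp apiserver port; any key from .NetworkSettings.Ports works the same way):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-828614
    # prints the mapped host port; per the Ports block above this would be 33076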
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-828614 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-828614 logs -n 25: (1.121990403s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:34 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-190944                                                                                                                                                                                                                         │ kubernetes-upgrade-190944    │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ cert-options-465214 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                          │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ ssh     │ -p cert-options-465214 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                        │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                               │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                            │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p old-k8s-version-126117 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:36:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:36:28.383841  302799 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:28.384168  302799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:28.384180  302799 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:28.384186  302799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:28.384537  302799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:28.385123  302799 out.go:368] Setting JSON to false
	I1209 02:36:28.386588  302799 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4737,"bootTime":1765243051,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:36:28.386681  302799 start.go:143] virtualization: kvm guest
	I1209 02:36:28.388488  302799 out.go:179] * [no-preload-185074] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:36:28.389660  302799 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:36:28.389705  302799 notify.go:221] Checking for updates...
	I1209 02:36:28.391622  302799 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:36:28.392754  302799 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:28.393801  302799 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:36:28.394819  302799 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:36:28.396755  302799 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:36:28.398403  302799 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:28.399123  302799 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:36:28.429446  302799 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:36:28.429622  302799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:28.503308  302799 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:28.489610651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:28.503420  302799 docker.go:319] overlay module found
	I1209 02:36:28.505340  302799 out.go:179] * Using the docker driver based on existing profile
	I1209 02:36:28.506680  302799 start.go:309] selected driver: docker
	I1209 02:36:28.506697  302799 start.go:927] validating driver "docker" against &{Name:no-preload-185074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-185074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:28.506839  302799 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:36:28.507469  302799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:28.574884  302799 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:28.563405186 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:28.575239  302799 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:36:28.575271  302799 cni.go:84] Creating CNI manager for ""
	I1209 02:36:28.575336  302799 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:28.575387  302799 start.go:353] cluster config:
	{Name:no-preload-185074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-185074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:28.577037  302799 out.go:179] * Starting "no-preload-185074" primary control-plane node in "no-preload-185074" cluster
	I1209 02:36:28.578108  302799 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:36:28.579597  302799 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:36:27.270472  300341 ssh_runner.go:195] Run: cat /version.json
	I1209 02:36:27.270547  300341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:36:27.270547  300341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:36:27.270620  300341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:36:27.288464  300341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:36:27.290380  300341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:36:27.442555  300341 ssh_runner.go:195] Run: systemctl --version
	I1209 02:36:27.450074  300341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:36:27.491528  300341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:36:27.496355  300341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:36:27.496429  300341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:36:27.504286  300341 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 02:36:27.504308  300341 start.go:496] detecting cgroup driver to use...
	I1209 02:36:27.504341  300341 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:36:27.504392  300341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:36:27.517921  300341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:36:27.529789  300341 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:36:27.529843  300341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:36:27.546779  300341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:36:27.564216  300341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:36:27.669919  300341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:36:27.758023  300341 docker.go:234] disabling docker service ...
	I1209 02:36:27.758091  300341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:36:27.773083  300341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:36:27.790513  300341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:36:27.883049  300341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:36:28.031747  300341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:36:28.055451  300341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:36:28.085097  300341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:36:28.085263  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.101283  300341 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:36:28.101525  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.116228  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.133009  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.146659  300341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:36:28.157864  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.175877  300341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.189723  300341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:28.201975  300341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:36:28.210149  300341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:36:28.217981  300341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:28.324879  300341 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:36:28.485209  300341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:36:28.485273  300341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:36:28.489769  300341 start.go:564] Will wait 60s for crictl version
	I1209 02:36:28.489828  300341 ssh_runner.go:195] Run: which crictl
	I1209 02:36:28.494708  300341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:36:28.524068  300341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:36:28.524144  300341 ssh_runner.go:195] Run: crio --version
	I1209 02:36:28.559311  300341 ssh_runner.go:195] Run: crio --version
	I1209 02:36:28.612957  300341 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
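The run of sed edits at 02:36:28.085 through 02:36:28.189 above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: pin the pause image, switch the cgroup manager to systemd, force conmon into the pod cgroup, and open unprivileged low ports. A rough sketch of the drop-in those edits converge on (assuming the stock kicbase layout, with pause_image under [crio.image] and the runtime knobs under [crio.runtime]):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The "sudo systemctl restart crio" above then applies it, after which crictl reports cri-o 1.34.3 ready.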
	I1209 02:36:28.580732  302799 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:36:28.580888  302799 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/no-preload-185074/config.json ...
	I1209 02:36:28.581232  302799 cache.go:107] acquiring lock: {Name:mkc105b9a44fd3c9968e5192dc4bf826b3abaf24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581314  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 02:36:28.581324  302799 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.548µs
	I1209 02:36:28.581337  302799 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 02:36:28.581354  302799 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:36:28.581468  302799 cache.go:107] acquiring lock: {Name:mke7470c1eb724c523ab497f3a16eb5bb1521ad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581564  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 02:36:28.581574  302799 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 119.784µs
	I1209 02:36:28.581605  302799 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 02:36:28.581623  302799 cache.go:107] acquiring lock: {Name:mk8d63172c5d7f1403b6b8861c7dd4663d65048d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581684  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 02:36:28.581691  302799 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 72.503µs
	I1209 02:36:28.581699  302799 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 02:36:28.581713  302799 cache.go:107] acquiring lock: {Name:mka5a3c5c24fda10df694ae8ae3e6b064117e413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581747  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 02:36:28.581755  302799 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 45.551µs
	I1209 02:36:28.581763  302799 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 02:36:28.581778  302799 cache.go:107] acquiring lock: {Name:mk7e4e5ab18e0e7421e936e35dcb7d7121a4399f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581778  302799 cache.go:107] acquiring lock: {Name:mk34028a346088b29d6ca0261e76500d308d9a7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581815  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 02:36:28.581825  302799 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.41µs
	I1209 02:36:28.581833  302799 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 02:36:28.581846  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1209 02:36:28.581855  302799 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 85.707µs
	I1209 02:36:28.581863  302799 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1209 02:36:28.581849  302799 cache.go:107] acquiring lock: {Name:mk585695c5fd1279a81b0d7872a3a883eec10270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581878  302799 cache.go:107] acquiring lock: {Name:mk42530d780dd1691de21c0f25d371e3b1c9f248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.581895  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1209 02:36:28.581902  302799 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 55.575µs
	I1209 02:36:28.581909  302799 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1209 02:36:28.581914  302799 cache.go:115] /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 02:36:28.581920  302799 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 45.086µs
	I1209 02:36:28.581928  302799 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-11001/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 02:36:28.581949  302799 cache.go:87] Successfully saved all images to host disk.
	I1209 02:36:28.616220  302799 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:36:28.616243  302799 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:36:28.616260  302799 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:36:28.616381  302799 start.go:360] acquireMachinesLock for no-preload-185074: {Name:mka48553c2ceac9cc3a685636dbe231c15eb1c0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:28.616503  302799 start.go:364] duration metric: took 100.703µs to acquireMachinesLock for "no-preload-185074"
	I1209 02:36:28.616559  302799 start.go:96] Skipping create...Using existing machine configuration
	I1209 02:36:28.616567  302799 fix.go:54] fixHost starting: 
	I1209 02:36:28.616956  302799 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:36:28.644884  302799 fix.go:112] recreateIfNeeded on no-preload-185074: state=Stopped err=<nil>
	W1209 02:36:28.644913  302799 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 02:36:26.141790  299506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:36:26.142857  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 02:36:26.142876  299506 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 02:36:26.142935  299506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:36:26.166109  299506 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:26.166139  299506 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:36:26.166206  299506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:36:26.182070  299506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:36:26.191699  299506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:36:26.202958  299506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:36:26.283515  299506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:26.296432  299506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:26.300097  299506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-126117" to be "Ready" ...
	I1209 02:36:26.303835  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 02:36:26.303857  299506 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 02:36:26.315588  299506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:26.320743  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 02:36:26.320761  299506 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 02:36:26.340936  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 02:36:26.340962  299506 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 02:36:26.358588  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 02:36:26.358605  299506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 02:36:26.375027  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 02:36:26.375047  299506 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 02:36:26.392481  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 02:36:26.392500  299506 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 02:36:26.408284  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 02:36:26.408317  299506 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 02:36:26.421278  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 02:36:26.421304  299506 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 02:36:26.435847  299506 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 02:36:26.435866  299506 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 02:36:26.450185  299506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 02:36:28.093284  299506 node_ready.go:49] node "old-k8s-version-126117" is "Ready"
	I1209 02:36:28.093318  299506 node_ready.go:38] duration metric: took 1.793177903s for node "old-k8s-version-126117" to be "Ready" ...
	I1209 02:36:28.093335  299506 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:36:28.093382  299506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:36:29.019226  299506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.722733649s)
	I1209 02:36:29.019257  299506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.703636697s)
	I1209 02:36:24.714053  296554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 02:36:24.718020  296554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1209 02:36:24.718038  296554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 02:36:24.730157  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 02:36:24.947751  296554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:36:24.947863  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:24.947883  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-828614 minikube.k8s.io/updated_at=2025_12_09T02_36_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=newest-cni-828614 minikube.k8s.io/primary=true
	I1209 02:36:24.959188  296554 ops.go:34] apiserver oom_adj: -16
	I1209 02:36:25.036069  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:25.536834  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:26.036444  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:26.536350  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:27.036864  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:27.536309  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:28.036333  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:28.536547  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:29.036201  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:29.516572  299506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.066332035s)
	I1209 02:36:29.516656  299506 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.42324243s)
	I1209 02:36:29.516687  299506 api_server.go:72] duration metric: took 3.413271709s to wait for apiserver process to appear ...
	I1209 02:36:29.516702  299506 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:36:29.516723  299506 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 02:36:29.518110  299506 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-126117 addons enable metrics-server
	
	I1209 02:36:29.519146  299506 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1209 02:36:29.536542  296554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:36:29.610844  296554 kubeadm.go:1114] duration metric: took 4.663047542s to wait for elevateKubeSystemPrivileges
	I1209 02:36:29.610880  296554 kubeadm.go:403] duration metric: took 11.750791734s to StartCluster
	I1209 02:36:29.610902  296554 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:29.610973  296554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:29.611919  296554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:29.612163  296554 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:36:29.612204  296554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 02:36:29.612214  296554 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:36:29.612307  296554 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-828614"
	I1209 02:36:29.612323  296554 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-828614"
	I1209 02:36:29.612359  296554 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:29.612396  296554 addons.go:70] Setting default-storageclass=true in profile "newest-cni-828614"
	I1209 02:36:29.612436  296554 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-828614"
	I1209 02:36:29.612437  296554 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:29.612769  296554 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:29.612913  296554 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:29.613751  296554 out.go:179] * Verifying Kubernetes components...
	I1209 02:36:29.615121  296554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:29.638020  296554 addons.go:239] Setting addon default-storageclass=true in "newest-cni-828614"
	I1209 02:36:29.638054  296554 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:29.638413  296554 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:29.638866  296554 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:36:28.614065  300341 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-512414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:36:28.643498  300341 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 02:36:28.648292  300341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
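	(Note on the one-liner above: the brace group rebuilds /etc/hosts as the unprivileged SSH user in /tmp/h.$$, with grep -v stripping any stale host.minikube.internal entry before the fresh 192.168.76.1 mapping is appended, and sudo cp then installs the result; a plain "sudo cmd > /etc/hosts" would apply the redirection without root privileges. The control-plane.minikube.internal update further down uses the same pattern.)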
	I1209 02:36:28.662654  300341 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:36:28.662923  300341 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:36:28.662979  300341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:28.711661  300341 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:28.711687  300341 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:36:28.711743  300341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:28.760606  300341 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:28.760647  300341 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:36:28.760657  300341 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1209 02:36:28.760782  300341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-512414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:36:28.760881  300341 ssh_runner.go:195] Run: crio config
	I1209 02:36:28.848760  300341 cni.go:84] Creating CNI manager for ""
	I1209 02:36:28.848782  300341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:28.848797  300341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:36:28.848826  300341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-512414 NodeName:default-k8s-diff-port-512414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:36:28.848980  300341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-512414"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
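	(The three YAML documents above are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming the v1.34.2 kubeadm binary is on the node, which the binaries check that follows confirms, the rendered file could be sanity-checked offline with something like:
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	kubeadm exits non-zero and reports unknown fields or version mismatches if the config is invalid.)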
	I1209 02:36:28.849048  300341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:36:28.862134  300341 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:36:28.862212  300341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:36:28.872581  300341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1209 02:36:28.891256  300341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:36:28.909447  300341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1209 02:36:28.926406  300341 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:36:28.932427  300341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:36:28.946831  300341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:29.075906  300341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:29.102199  300341 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414 for IP: 192.168.76.2
	I1209 02:36:29.102294  300341 certs.go:195] generating shared ca certs ...
	I1209 02:36:29.102333  300341 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:29.102553  300341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:36:29.102777  300341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:36:29.102820  300341 certs.go:257] generating profile certs ...
	I1209 02:36:29.102955  300341 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/client.key
	I1209 02:36:29.103157  300341 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key.907630c7
	I1209 02:36:29.103219  300341 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key
	I1209 02:36:29.103366  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:36:29.103418  300341 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:36:29.103432  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:36:29.103468  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:36:29.103502  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:36:29.103533  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:36:29.103591  300341 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:29.104416  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:36:29.130147  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:36:29.159712  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:36:29.190165  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:36:29.223331  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 02:36:29.259140  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:36:29.287475  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:36:29.315140  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/default-k8s-diff-port-512414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:36:29.338039  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:36:29.359196  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:36:29.391764  300341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:36:29.421482  300341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:36:29.439120  300341 ssh_runner.go:195] Run: openssl version
	I1209 02:36:29.447442  300341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:36:29.457937  300341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:36:29.467870  300341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:36:29.473143  300341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:36:29.473204  300341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:36:29.522261  300341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:36:29.531169  300341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:29.539895  300341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:36:29.547923  300341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:29.552974  300341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:29.553026  300341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:29.597580  300341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:36:29.606281  300341 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:36:29.615170  300341 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:36:29.625285  300341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:36:29.630388  300341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:36:29.630446  300341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:36:29.682097  300341 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
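	(The openssl x509 -hash -noout calls above print each CA certificate's OpenSSL subject-name hash; that hash names the /etc/ssl/certs/<hash>.0 symlink OpenSSL consults at verification time, which is why every hash step is followed by a test -L on the matching link. Reproducing one pair from this run:
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ sudo test -L /etc/ssl/certs/b5213941.0 && echo linked
	  linked
	The hashes 3ec20f2e and 51391683 map to 145522.pem and 14552.pem the same way.)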
	I1209 02:36:29.695083  300341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:36:29.700771  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 02:36:29.760285  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 02:36:29.821654  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 02:36:29.888791  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 02:36:29.957744  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 02:36:30.018057  300341 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
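	(Each -checkend 86400 invocation makes openssl exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours; the run proceeding straight to StartCluster with no regeneration suggests all six control-plane certs passed. A standalone equivalent:
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for at least 24h" || echo "expires within 24h")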
	I1209 02:36:30.080128  300341 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-512414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-512414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:30.080229  300341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:36:30.080302  300341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:36:30.124669  300341 cri.go:89] found id: "5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2"
	I1209 02:36:30.124753  300341 cri.go:89] found id: "53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd"
	I1209 02:36:30.124779  300341 cri.go:89] found id: "08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6"
	I1209 02:36:30.124805  300341 cri.go:89] found id: "59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237"
	I1209 02:36:30.124834  300341 cri.go:89] found id: ""
	I1209 02:36:30.124910  300341 ssh_runner.go:195] Run: sudo runc list -f json
	W1209 02:36:30.144602  300341 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:30Z" level=error msg="open /run/runc: no such file or directory"
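	(This warning is benign: /run/runc holds runc's live-container state, so its absence simply means there are no paused containers to resume, and minikube falls through to the configuration-file check and the cluster-restart path below.)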
	I1209 02:36:30.144705  300341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:36:30.155991  300341 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 02:36:30.156022  300341 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 02:36:30.156061  300341 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 02:36:30.167562  300341 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:36:30.168454  300341 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-512414" does not appear in /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:30.168974  300341 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-11001/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-512414" cluster setting kubeconfig missing "default-k8s-diff-port-512414" context setting]
	I1209 02:36:30.169954  300341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:30.172365  300341 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 02:36:30.182688  300341 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 02:36:30.182733  300341 kubeadm.go:602] duration metric: took 26.700361ms to restartPrimaryControlPlane
	I1209 02:36:30.182744  300341 kubeadm.go:403] duration metric: took 102.62426ms to StartCluster
	I1209 02:36:30.182759  300341 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:30.182824  300341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:30.184377  300341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:30.184619  300341 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:36:30.184870  300341 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:36:30.184924  300341 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:36:30.185007  300341 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-512414"
	I1209 02:36:30.185029  300341 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-512414"
	W1209 02:36:30.185042  300341 addons.go:248] addon storage-provisioner should already be in state true
	I1209 02:36:30.185065  300341 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:36:30.185520  300341 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-512414"
	I1209 02:36:30.185544  300341 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-512414"
	W1209 02:36:30.185552  300341 addons.go:248] addon dashboard should already be in state true
	I1209 02:36:30.185584  300341 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:36:30.186056  300341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:36:30.186323  300341 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-512414"
	I1209 02:36:30.186359  300341 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-512414"
	I1209 02:36:30.186986  300341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:36:30.187172  300341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:36:30.187230  300341 out.go:179] * Verifying Kubernetes components...
	I1209 02:36:30.191171  300341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:30.219969  300341 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-512414"
	W1209 02:36:30.220003  300341 addons.go:248] addon default-storageclass should already be in state true
	I1209 02:36:30.220033  300341 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:36:30.220458  300341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:36:30.222037  300341 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:36:30.222054  300341 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 02:36:30.223086  300341 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:30.223105  300341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:36:30.223162  300341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:36:30.227558  300341 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 02:36:29.640666  296554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:29.640684  296554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:36:29.640731  296554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:29.667268  296554 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:29.667292  296554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:36:29.667432  296554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:29.674169  296554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:29.692793  296554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:29.730791  296554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
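	(The sed pipeline above rewrites the live coredns ConfigMap: it inserts a hosts block ahead of the forward directive so in-cluster lookups of host.minikube.internal resolve to the gateway, and adds a log directive before errors. The injected Corefile fragment is:
	  hosts {
	     192.168.94.1 host.minikube.internal
	     fallthrough
	  }
	The "host record injected" line shortly after confirms the replace succeeded.)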
	I1209 02:36:29.774382  296554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:29.812458  296554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:29.831588  296554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:30.032256  296554 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1209 02:36:30.033339  296554 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:36:30.033390  296554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:36:30.299882  296554 api_server.go:72] duration metric: took 687.685238ms to wait for apiserver process to appear ...
	I1209 02:36:30.299910  296554 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:36:30.299940  296554 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:30.309822  296554 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
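	(The healthz probe can be reproduced by hand; -k skips verification of the API server's minikubeCA-signed certificate:
	  $ curl -k https://192.168.94.2:8443/healthz
	  ok)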
	I1209 02:36:30.311457  296554 api_server.go:141] control plane version: v1.35.0-beta.0
	I1209 02:36:30.311484  296554 api_server.go:131] duration metric: took 11.566125ms to wait for apiserver health ...
	I1209 02:36:30.311495  296554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:36:30.314625  296554 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 02:36:30.315558  296554 addons.go:530] duration metric: took 703.338765ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:36:30.316231  296554 system_pods.go:59] 8 kube-system pods found
	I1209 02:36:30.316272  296554 system_pods.go:61] "coredns-7d764666f9-2gmfb" [07cf9a9f-2b91-4573-9b7e-a960d3bdbc45] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:30.316287  296554 system_pods.go:61] "etcd-newest-cni-828614" [b40c8743-bfbf-43e7-a4ad-3ae1cb4114e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:36:30.316296  296554 system_pods.go:61] "kindnet-fdwzs" [eca30b43-2f4e-4789-8909-c1b9da3b9569] Running
	I1209 02:36:30.316467  296554 system_pods.go:61] "kube-apiserver-newest-cni-828614" [12d6ff53-a8bd-4fa7-93ec-842147989244] Running
	I1209 02:36:30.316485  296554 system_pods.go:61] "kube-controller-manager-newest-cni-828614" [05280260-1034-4afd-8ff7-40b3acf1ef06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:36:30.316717  296554 system_pods.go:61] "kube-proxy-lh72l" [2042b849-e922-4790-9104-b640df5ee37b] Running
	I1209 02:36:30.316739  296554 system_pods.go:61] "kube-scheduler-newest-cni-828614" [ff30f0c3-21ae-40f2-bcb4-9b54dfca1e19] Running
	I1209 02:36:30.316749  296554 system_pods.go:61] "storage-provisioner" [8ed7e008-713f-42f7-9e3b-83bd745a2ebd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:30.316757  296554 system_pods.go:74] duration metric: took 5.254854ms to wait for pod list to return data ...
	I1209 02:36:30.316808  296554 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:36:30.320208  296554 default_sa.go:45] found service account: "default"
	I1209 02:36:30.320234  296554 default_sa.go:55] duration metric: took 3.412917ms for default service account to be created ...
	I1209 02:36:30.320247  296554 kubeadm.go:587] duration metric: took 708.054226ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 02:36:30.320282  296554 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:36:30.323174  296554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:36:30.323211  296554 node_conditions.go:123] node cpu capacity is 8
	I1209 02:36:30.323228  296554 node_conditions.go:105] duration metric: took 2.934918ms to run NodePressure ...
	I1209 02:36:30.323246  296554 start.go:242] waiting for startup goroutines ...
	I1209 02:36:30.536300  296554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-828614" context rescaled to 1 replicas
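	(Rescaling the coredns deployment to one replica appears to be minikube's usual single-node economy; the equivalent manual step would be:
	  kubectl -n kube-system scale deployment coredns --replicas=1)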
	I1209 02:36:30.536334  296554 start.go:247] waiting for cluster config update ...
	I1209 02:36:30.536348  296554 start.go:256] writing updated cluster config ...
	I1209 02:36:30.536597  296554 ssh_runner.go:195] Run: rm -f paused
	I1209 02:36:30.599586  296554 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:36:30.601784  296554 out.go:179] * Done! kubectl is now configured to use "newest-cni-828614" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.690444173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.693736168Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d223fea5-0158-4cc8-a5fa-864fb688e232 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.699067206Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.699995592Z" level=info msg="Ran pod sandbox 04f0c8671a4e96c312173f0843f53784943128a2f07a6b858107243057cd69a3 with infra container: kube-system/kube-proxy-lh72l/POD" id=d223fea5-0158-4cc8-a5fa-864fb688e232 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.702402886Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4513982b-c94a-4fc8-9274-95c205c78508 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.702810363Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a418378c-6d1e-4b25-803f-b15297f0721b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.705043858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=354ea465-b20b-4ce7-ae35-bf43c4bfa287 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.705665507Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.709206552Z" level=info msg="Ran pod sandbox 857d9c8f531df0af5c6fae36d14d47fb9b6557091e5af24cd6951d5ad4d2b631 with infra container: kube-system/kindnet-fdwzs/POD" id=a418378c-6d1e-4b25-803f-b15297f0721b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.711590632Z" level=info msg="Creating container: kube-system/kube-proxy-lh72l/kube-proxy" id=de3aa377-8dd5-4466-a186-18b3e3b5c3aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.711707254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.714219958Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c7792ba-4b36-469d-8d78-f27b553713cf name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.715603482Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e328823e-d02e-41f2-9d70-bc899c87eee2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.717601357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.718883561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.720520725Z" level=info msg="Creating container: kube-system/kindnet-fdwzs/kindnet-cni" id=9558714d-6ef2-4047-9a52-b686357e6e29 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.720608346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.725349969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.72591293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.825848212Z" level=info msg="Created container 3a2383d1911dd62ebb6c2c0e212f2f4c5389a18e42f9b64eb70f017bb8cb6449: kube-system/kindnet-fdwzs/kindnet-cni" id=9558714d-6ef2-4047-9a52-b686357e6e29 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.827225669Z" level=info msg="Starting container: 3a2383d1911dd62ebb6c2c0e212f2f4c5389a18e42f9b64eb70f017bb8cb6449" id=10b83c9c-3d49-490b-b8a3-7ca629796777 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.832126067Z" level=info msg="Started container" PID=1568 containerID=3a2383d1911dd62ebb6c2c0e212f2f4c5389a18e42f9b64eb70f017bb8cb6449 description=kube-system/kindnet-fdwzs/kindnet-cni id=10b83c9c-3d49-490b-b8a3-7ca629796777 name=/runtime.v1.RuntimeService/StartContainer sandboxID=857d9c8f531df0af5c6fae36d14d47fb9b6557091e5af24cd6951d5ad4d2b631
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.83467425Z" level=info msg="Created container c8f5d8b9a318c0010e5b07c660391ac3e9c327a2b39969be5a7427ee582be03d: kube-system/kube-proxy-lh72l/kube-proxy" id=de3aa377-8dd5-4466-a186-18b3e3b5c3aa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.835962906Z" level=info msg="Starting container: c8f5d8b9a318c0010e5b07c660391ac3e9c327a2b39969be5a7427ee582be03d" id=505a1cd0-b526-4b19-98ab-4acf7860a09f name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:29 newest-cni-828614 crio[781]: time="2025-12-09T02:36:29.84021476Z" level=info msg="Started container" PID=1570 containerID=c8f5d8b9a318c0010e5b07c660391ac3e9c327a2b39969be5a7427ee582be03d description=kube-system/kube-proxy-lh72l/kube-proxy id=505a1cd0-b526-4b19-98ab-4acf7860a09f name=/runtime.v1.RuntimeService/StartContainer sandboxID=04f0c8671a4e96c312173f0843f53784943128a2f07a6b858107243057cd69a3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3a2383d1911dd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   857d9c8f531df       kindnet-fdwzs                               kube-system
	c8f5d8b9a318c       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   04f0c8671a4e9       kube-proxy-lh72l                            kube-system
	edd9532fc8c94       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   4e641479801f9       kube-controller-manager-newest-cni-828614   kube-system
	6c1457ec31309       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   f05acf8f41038       kube-apiserver-newest-cni-828614            kube-system
	4b32a0c4f34d3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   e32e9642b7132       etcd-newest-cni-828614                      kube-system
	8796db082dfb5       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   4c643c69e0fd3       kube-scheduler-newest-cni-828614            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-828614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-828614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=newest-cni-828614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_36_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:36:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-828614
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:36:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:24 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:24 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:24 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 09 Dec 2025 02:36:24 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-828614
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                693eaa58-e11a-4b63-aa70-2ba2e2c1dd88
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-828614                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
	  kube-system                 kindnet-fdwzs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-828614             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-newest-cni-828614    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-lh72l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-828614             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-828614 event: Registered Node newest-cni-828614 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [4b32a0c4f34d36e05034331059f004c4e2235bc7bc1b8e3f5c826d8e2e0cb2c9] <==
	{"level":"warn","ts":"2025-12-09T02:36:20.871672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.879083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.885033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.891122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.900755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.907430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.913482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.920093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.926096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.934771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.941310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.947864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.953870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.959691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.965711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.971680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.977652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.983566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.989545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:20.995705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:21.011525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:21.017301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:21.023402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:21.029269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:21.076274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51134","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:36:32 up  1:19,  0 user,  load average: 3.17, 2.47, 1.81
	Linux newest-cni-828614 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a2383d1911dd62ebb6c2c0e212f2f4c5389a18e42f9b64eb70f017bb8cb6449] <==
	I1209 02:36:30.091989       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:30.092412       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:36:30.092568       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:30.092607       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:30.092730       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:30.304817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:30.387449       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:30.387651       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:30.387871       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6c1457ec3130969af565e0f1d23be304d63491130c1702e4f8815b150f777220] <==
	I1209 02:36:21.530717       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:21.530732       1 policy_source.go:248] refreshing policies
	E1209 02:36:21.558059       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1209 02:36:21.606037       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:21.607873       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1209 02:36:21.608165       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:21.612494       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:21.703379       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:22.409627       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1209 02:36:22.413957       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:36:22.413977       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:36:22.909804       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:22.945115       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:23.014179       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:36:23.020133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1209 02:36:23.021377       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:36:23.027852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:36:23.449529       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:24.111021       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:24.120017       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:36:24.125976       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:36:28.908619       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:28.913735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:29.355147       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1209 02:36:29.405971       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [edd9532fc8c94de71f0abfc60be8aac7ecd2ea7f474ff23013450c928d7677f0] <==
	I1209 02:36:28.263557       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264399       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264462       1 range_allocator.go:177] "Sending events to api server"
	I1209 02:36:28.264517       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:36:28.264531       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:28.264537       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264540       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264597       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264687       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264000       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.264963       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.265015       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.266040       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.266112       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.266129       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.266180       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.266752       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:28.265016       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.268141       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.278757       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-828614" podCIDRs=["10.42.0.0/24"]
	I1209 02:36:28.278719       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.365438       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:28.365532       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:36:28.365544       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:36:28.367584       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c8f5d8b9a318c0010e5b07c660391ac3e9c327a2b39969be5a7427ee582be03d] <==
	I1209 02:36:29.945419       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:30.038446       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:30.139480       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:30.139600       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:36:30.139745       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:30.173535       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:30.173694       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:36:30.184007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:30.184417       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:36:30.184502       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:30.186475       1 config.go:200] "Starting service config controller"
	I1209 02:36:30.186495       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:30.186561       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:30.186574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:30.186778       1 config.go:309] "Starting node config controller"
	I1209 02:36:30.186788       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:30.186795       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:30.186910       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:30.186920       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:30.287773       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:36:30.288141       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:30.288096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8796db082dfb52fdb40a6008bcb83ad0568cdf5d6a04531e4210ffd9652c7bad] <==
	E1209 02:36:22.355763       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1209 02:36:22.356810       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:36:22.374202       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:36:22.375242       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1209 02:36:22.446742       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1209 02:36:22.447804       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:36:22.483238       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:36:22.484315       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1209 02:36:22.487431       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1209 02:36:22.488398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1209 02:36:22.498510       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:36:22.499479       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:36:22.547905       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1209 02:36:22.549430       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1209 02:36:22.616053       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:36:22.617188       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:36:22.674610       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1209 02:36:22.675573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1209 02:36:22.704871       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:36:22.705888       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1209 02:36:22.732120       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:36:22.733218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:36:22.910973       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1209 02:36:22.912186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1209 02:36:25.951903       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:36:24 newest-cni-828614 kubelet[1302]: I1209 02:36:24.986018    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-828614" podStartSLOduration=0.985999234 podStartE2EDuration="985.999234ms" podCreationTimestamp="2025-12-09 02:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:36:24.972078439 +0000 UTC m=+1.127464802" watchObservedRunningTime="2025-12-09 02:36:24.985999234 +0000 UTC m=+1.141385577"
	Dec 09 02:36:24 newest-cni-828614 kubelet[1302]: I1209 02:36:24.986208    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-828614" podStartSLOduration=2.986197801 podStartE2EDuration="2.986197801s" podCreationTimestamp="2025-12-09 02:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:36:24.986121762 +0000 UTC m=+1.141508127" watchObservedRunningTime="2025-12-09 02:36:24.986197801 +0000 UTC m=+1.141584161"
	Dec 09 02:36:24 newest-cni-828614 kubelet[1302]: I1209 02:36:24.994475    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-828614" podStartSLOduration=2.994458914 podStartE2EDuration="2.994458914s" podCreationTimestamp="2025-12-09 02:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:36:24.994302114 +0000 UTC m=+1.149688477" watchObservedRunningTime="2025-12-09 02:36:24.994458914 +0000 UTC m=+1.149845277"
	Dec 09 02:36:25 newest-cni-828614 kubelet[1302]: E1209 02:36:25.941601    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-828614" containerName="kube-apiserver"
	Dec 09 02:36:25 newest-cni-828614 kubelet[1302]: E1209 02:36:25.942248    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-828614" containerName="kube-controller-manager"
	Dec 09 02:36:25 newest-cni-828614 kubelet[1302]: E1209 02:36:25.942489    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:25 newest-cni-828614 kubelet[1302]: E1209 02:36:25.942681    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:26 newest-cni-828614 kubelet[1302]: E1209 02:36:26.943262    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:26 newest-cni-828614 kubelet[1302]: E1209 02:36:26.943360    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:26 newest-cni-828614 kubelet[1302]: E1209 02:36:26.943434    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-828614" containerName="kube-apiserver"
	Dec 09 02:36:26 newest-cni-828614 kubelet[1302]: E1209 02:36:26.943505    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-828614" containerName="kube-controller-manager"
	Dec 09 02:36:27 newest-cni-828614 kubelet[1302]: E1209 02:36:27.946569    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:28 newest-cni-828614 kubelet[1302]: I1209 02:36:28.372107    1302 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 09 02:36:28 newest-cni-828614 kubelet[1302]: I1209 02:36:28.372908    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453740    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbkl\" (UniqueName: \"kubernetes.io/projected/eca30b43-2f4e-4789-8909-c1b9da3b9569-kube-api-access-5bbkl\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453800    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-xtables-lock\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453828    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-lib-modules\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453851    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpxm\" (UniqueName: \"kubernetes.io/projected/2042b849-e922-4790-9104-b640df5ee37b-kube-api-access-ddpxm\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453875    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2042b849-e922-4790-9104-b640df5ee37b-kube-proxy\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453897    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-xtables-lock\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453923    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-lib-modules\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.453952    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-cni-cfg\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:29 newest-cni-828614 kubelet[1302]: I1209 02:36:29.976357    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-fdwzs" podStartSLOduration=0.976341575 podStartE2EDuration="976.341575ms" podCreationTimestamp="2025-12-09 02:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:36:29.975611617 +0000 UTC m=+6.130997980" watchObservedRunningTime="2025-12-09 02:36:29.976341575 +0000 UTC m=+6.131727938"
	Dec 09 02:36:31 newest-cni-828614 kubelet[1302]: E1209 02:36:31.913305    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:31 newest-cni-828614 kubelet[1302]: I1209 02:36:31.931214    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lh72l" podStartSLOduration=2.931193294 podStartE2EDuration="2.931193294s" podCreationTimestamp="2025-12-09 02:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:36:29.996924543 +0000 UTC m=+6.152310907" watchObservedRunningTime="2025-12-09 02:36:31.931193294 +0000 UTC m=+8.086579657"
	

                                                
                                                
-- /stdout --
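One non-fatal error in the kindnet log above is worth flagging: the NRI plugin exits because /var/run/nri/nri.sock is absent on the node. A quick hand check over minikube's ssh wrapper (profile name taken from this run; the socket path is copied from the log):

	# test -S checks for a unix socket at the path kindnet tried to dial
	out/minikube-linux-amd64 -p newest-cni-828614 ssh -- test -S /var/run/nri/nri.sock && echo "nri socket present" || echo "nri socket absent"

kindnet appears to continue without NRI (its controller keeps starting in the log above), so this alone does not explain the test failure.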
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-828614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-2gmfb storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner: exit status 1 (71.520853ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-2gmfb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.30s)
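For reference, the non-running-pod probe that produced the NotFound errors above can be replayed by hand; the command is copied from the helper output, and the failing describe most likely raced with the pods being replaced between the two calls:

	# same field selector the harness uses at helpers_test.go:269
	kubectl --context newest-cni-828614 get po -A --field-selector=status.phase!=Running
	# describing the returned names immediately afterwards can hit NotFound,
	# exactly as seen above, if the pods were deleted or renamed in between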

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-828614 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-828614 --alsologtostderr -v=1: exit status 80 (1.624365218s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-828614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:36:58.480602  310661 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:58.480714  310661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:58.480725  310661 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:58.480728  310661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:58.480949  310661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:58.481198  310661 out.go:368] Setting JSON to false
	I1209 02:36:58.481219  310661 mustload.go:66] Loading cluster: newest-cni-828614
	I1209 02:36:58.481566  310661 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:58.481956  310661 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:58.500582  310661 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:58.500878  310661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:58.568167  310661 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-09 02:36:58.556261105 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:58.569061  310661 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-828614 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:36:58.571340  310661 out.go:179] * Pausing node newest-cni-828614 ... 
	I1209 02:36:58.572487  310661 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:58.572853  310661 ssh_runner.go:195] Run: systemctl --version
	I1209 02:36:58.572906  310661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:58.594899  310661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:58.689246  310661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:36:58.703273  310661 pause.go:52] kubelet running: true
	I1209 02:36:58.703334  310661 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:36:58.856345  310661 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:36:58.856426  310661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:36:58.919631  310661 cri.go:89] found id: "5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a"
	I1209 02:36:58.919667  310661 cri.go:89] found id: "647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef"
	I1209 02:36:58.919673  310661 cri.go:89] found id: "e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b"
	I1209 02:36:58.919678  310661 cri.go:89] found id: "be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773"
	I1209 02:36:58.919683  310661 cri.go:89] found id: "c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a"
	I1209 02:36:58.919688  310661 cri.go:89] found id: "53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963"
	I1209 02:36:58.919692  310661 cri.go:89] found id: ""
	I1209 02:36:58.919745  310661 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:36:58.931338  310661 retry.go:31] will retry after 240.850229ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:58Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:36:59.172595  310661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:36:59.185441  310661 pause.go:52] kubelet running: false
	I1209 02:36:59.185500  310661 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:36:59.300693  310661 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:36:59.300786  310661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:36:59.364714  310661 cri.go:89] found id: "5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a"
	I1209 02:36:59.364734  310661 cri.go:89] found id: "647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef"
	I1209 02:36:59.364741  310661 cri.go:89] found id: "e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b"
	I1209 02:36:59.364745  310661 cri.go:89] found id: "be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773"
	I1209 02:36:59.364749  310661 cri.go:89] found id: "c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a"
	I1209 02:36:59.364753  310661 cri.go:89] found id: "53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963"
	I1209 02:36:59.364757  310661 cri.go:89] found id: ""
	I1209 02:36:59.364812  310661 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:36:59.376090  310661 retry.go:31] will retry after 464.903535ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:59Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:36:59.841544  310661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:36:59.855462  310661 pause.go:52] kubelet running: false
	I1209 02:36:59.855518  310661 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:36:59.966966  310661 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:36:59.967046  310661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:00.027720  310661 cri.go:89] found id: "5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a"
	I1209 02:37:00.027742  310661 cri.go:89] found id: "647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef"
	I1209 02:37:00.027748  310661 cri.go:89] found id: "e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b"
	I1209 02:37:00.027752  310661 cri.go:89] found id: "be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773"
	I1209 02:37:00.027756  310661 cri.go:89] found id: "c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a"
	I1209 02:37:00.027761  310661 cri.go:89] found id: "53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963"
	I1209 02:37:00.027765  310661 cri.go:89] found id: ""
	I1209 02:37:00.027812  310661 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:00.041046  310661 out.go:203] 
	W1209 02:37:00.042230  310661 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:37:00.042242  310661 out.go:285] * 
	* 
	W1209 02:37:00.046141  310661 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:37:00.047330  310661 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-828614 --alsologtostderr -v=1 failed: exit status 80
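The trace above shows the actual failure point: after crictl successfully lists the kube-system containers, pause.go shells out to `sudo runc list -f json`, which exits with status 1 on all three attempts because /run/runc does not exist on this crio node. A minimal reproduction sketch (profile name and commands copied from the log; the alternative state directory is an assumption, check the node's crio/runc configuration first):

	# the probe that fails three times in the trace above:
	out/minikube-linux-amd64 -p newest-cni-828614 ssh -- sudo runc list -f json
	# runc reads container state from /run/runc unless told otherwise; if the
	# runtime keeps state under a different root it must be passed explicitly
	# (the directory below is illustrative, not verified on this node):
	out/minikube-linux-amd64 -p newest-cni-828614 ssh -- sudo runc --root /run/crio/runc list -f json
	# the CRI-level listing used earlier in the trace still succeeds:
	out/minikube-linux-amd64 -p newest-cni-828614 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system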
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-828614
helpers_test.go:243: (dbg) docker inspect newest-cni-828614:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	        "Created": "2025-12-09T02:36:13.995817577Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:48.456284916Z",
	            "FinishedAt": "2025-12-09T02:36:44.971615436Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hosts",
	        "LogPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b-json.log",
	        "Name": "/newest-cni-828614",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-828614:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-828614",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	                "LowerDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-828614",
	                "Source": "/var/lib/docker/volumes/newest-cni-828614/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-828614",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-828614",
	                "name.minikube.sigs.k8s.io": "newest-cni-828614",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5134e33931c98c8d9a528e03c2e0ecfc92e96c79e551c4d23bf0095758bf6db7",
	            "SandboxKey": "/var/run/docker/netns/5134e33931c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-828614": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfdf3df1d734c42201a8f8b2262b719bd3d94c4522be0d2bca9d7ea31c9d112b",
	                    "EndpointID": "b873fd0365085ccb9e0f79a5b1c6a2e87a35e393d654763883a310dc9d1a62f0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ce:83:7b:79:b6:14",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-828614",
	                        "bdcb940dfa8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
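The SSH endpoint the pause command dialed (127.0.0.1:33093 in the stderr trace above) comes straight out of this inspect payload. The Go template minikube uses appears verbatim in the trace and can be run by hand:

	# resolve the host port mapped to the node's port 22 (template copied from the trace)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-828614
	# prints 33093 for this run, matching NetworkSettings.Ports in the JSON above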
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614: exit status 2 (308.077127ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-828614 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                               │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                            │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p old-k8s-version-126117 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:36:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:36:48.169421  308375 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:48.169547  308375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:48.169558  308375 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:48.169565  308375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:48.169923  308375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:48.170497  308375 out.go:368] Setting JSON to false
	I1209 02:36:48.172116  308375 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4757,"bootTime":1765243051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:36:48.172210  308375 start.go:143] virtualization: kvm guest
	I1209 02:36:48.175792  308375 out.go:179] * [newest-cni-828614] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:36:48.177277  308375 notify.go:221] Checking for updates...
	I1209 02:36:48.177312  308375 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:36:48.178653  308375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:36:48.179907  308375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:48.181392  308375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:36:48.183002  308375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:36:48.184938  308375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:36:48.186841  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:48.187586  308375 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:36:48.221134  308375 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:36:48.221270  308375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:48.298608  308375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:48.284363889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:48.298840  308375 docker.go:319] overlay module found
	I1209 02:36:48.301272  308375 out.go:179] * Using the docker driver based on existing profile
	I1209 02:36:48.302579  308375 start.go:309] selected driver: docker
	I1209 02:36:48.302606  308375 start.go:927] validating driver "docker" against &{Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:48.302754  308375 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:36:48.303505  308375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:48.372512  308375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:48.360070101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:48.372933  308375 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 02:36:48.372974  308375 cni.go:84] Creating CNI manager for ""
	I1209 02:36:48.373038  308375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:48.373095  308375 start.go:353] cluster config:
	{Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:48.375448  308375 out.go:179] * Starting "newest-cni-828614" primary control-plane node in "newest-cni-828614" cluster
	I1209 02:36:48.376621  308375 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:36:48.377871  308375 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	W1209 02:36:45.247158  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:47.747396  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:36:48.379194  308375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:36:48.379230  308375 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1209 02:36:48.379244  308375 cache.go:65] Caching tarball of preloaded images
	I1209 02:36:48.379296  308375 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:36:48.379370  308375 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:36:48.379381  308375 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 02:36:48.379499  308375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/config.json ...
	I1209 02:36:48.403757  308375 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:36:48.403775  308375 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:36:48.403796  308375 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:36:48.403839  308375 start.go:360] acquireMachinesLock for newest-cni-828614: {Name:mkab46b836c33e2166d46d2cab81ca7a184524e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:48.403902  308375 start.go:364] duration metric: took 41.149µs to acquireMachinesLock for "newest-cni-828614"
	I1209 02:36:48.403926  308375 start.go:96] Skipping create...Using existing machine configuration
	I1209 02:36:48.403936  308375 fix.go:54] fixHost starting: 
	I1209 02:36:48.404216  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:48.425774  308375 fix.go:112] recreateIfNeeded on newest-cni-828614: state=Stopped err=<nil>
	W1209 02:36:48.425811  308375 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 02:36:45.623036  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:48.069934  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:48.136829  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:36:50.634041  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	I1209 02:36:48.427350  308375 out.go:252] * Restarting existing docker container for "newest-cni-828614" ...
	I1209 02:36:48.427425  308375 cli_runner.go:164] Run: docker start newest-cni-828614
	I1209 02:36:48.735303  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:48.761263  308375 kic.go:430] container "newest-cni-828614" state is running.
	I1209 02:36:48.761854  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:48.781379  308375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/config.json ...
	I1209 02:36:48.781681  308375 machine.go:94] provisionDockerMachine start ...
	I1209 02:36:48.781762  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:48.802160  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:48.802488  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:48.802510  308375 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:36:48.803199  308375 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45034->127.0.0.1:33093: read: connection reset by peer
	I1209 02:36:51.930316  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-828614
	
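	(The dial error above is routine right after "docker start": the container is running before its sshd is listening, so libmachine simply retries until the handshake succeeds. A minimal retry loop of the same shape, sketched in Go; dialWithRetry and its parameters are illustrative, not minikube's actual helper, and golang.org/x/crypto/ssh is assumed:
	
	    package sshutil
	
	    import (
	        "time"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    // dialWithRetry keeps attempting an SSH connection until the daemon
	    // accepts it or the deadline passes. Early attempts typically fail
	    // with "connection reset by peer" while sshd is still starting.
	    func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	        var lastErr error
	        for start := time.Now(); time.Since(start) < deadline; {
	            client, err := ssh.Dial("tcp", addr, cfg)
	            if err == nil {
	                return client, nil
	            }
	            lastErr = err
	            time.Sleep(500 * time.Millisecond)
	        }
	        return nil, lastErr
	    }
	)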
	I1209 02:36:51.930342  308375 ubuntu.go:182] provisioning hostname "newest-cni-828614"
	I1209 02:36:51.930405  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:51.948305  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:51.948539  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:51.948560  308375 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-828614 && echo "newest-cni-828614" | sudo tee /etc/hostname
	I1209 02:36:52.083987  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-828614
	
	I1209 02:36:52.084074  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.103783  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:52.104089  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:52.104117  308375 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-828614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-828614/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-828614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:36:52.230818  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
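	(The provisioning script above is plain check-then-append: add a 127.0.1.1 mapping only if no existing /etc/hosts line already resolves the new hostname. The same logic sketched as a Go helper; ensureHostsEntry is hypothetical, not minikube code:
	
	    package provision
	
	    import (
	        "fmt"
	        "os"
	        "strings"
	    )
	
	    // ensureHostsEntry appends "127.0.1.1 <name>" to the hosts file at
	    // path unless some existing entry already lists name.
	    func ensureHostsEntry(path, name string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        for _, line := range strings.Split(string(data), "\n") {
	            fields := strings.Fields(line)
	            if len(fields) < 2 || strings.HasPrefix(fields[0], "#") {
	                continue
	            }
	            for _, host := range fields[1:] {
	                if host == name {
	                    return nil // hostname already resolvable
	                }
	            }
	        }
	        f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
	        if err != nil {
	            return err
	        }
	        defer f.Close()
	        _, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
	        return err
	    }
	)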
	I1209 02:36:52.230859  308375 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:36:52.230885  308375 ubuntu.go:190] setting up certificates
	I1209 02:36:52.230901  308375 provision.go:84] configureAuth start
	I1209 02:36:52.230968  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:52.249038  308375 provision.go:143] copyHostCerts
	I1209 02:36:52.249100  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:36:52.249115  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:36:52.249191  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:36:52.249316  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:36:52.249329  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:36:52.249372  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:36:52.249468  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:36:52.249480  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:36:52.249524  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:36:52.249614  308375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.newest-cni-828614 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-828614]
	I1209 02:36:52.327226  308375 provision.go:177] copyRemoteCerts
	I1209 02:36:52.327284  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:36:52.327334  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.344919  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:52.438272  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:36:52.455238  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:36:52.471607  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:36:52.487661  308375 provision.go:87] duration metric: took 256.744072ms to configureAuth
	I1209 02:36:52.487682  308375 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:36:52.487874  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:52.488037  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.505587  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:52.505839  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:52.505855  308375 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:36:52.795438  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:36:52.795463  308375 machine.go:97] duration metric: took 4.013763672s to provisionDockerMachine
	I1209 02:36:52.795489  308375 start.go:293] postStartSetup for "newest-cni-828614" (driver="docker")
	I1209 02:36:52.795512  308375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:36:52.795584  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:36:52.795631  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.814111  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:52.906157  308375 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:36:52.909534  308375 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:36:52.909555  308375 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:36:52.909565  308375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:36:52.909610  308375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:36:52.909702  308375 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:36:52.909795  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:36:52.917410  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:52.933628  308375 start.go:296] duration metric: took 138.127648ms for postStartSetup
	I1209 02:36:52.933708  308375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:36:52.933747  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.951129  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:53.039107  308375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:36:53.043301  308375 fix.go:56] duration metric: took 4.639361802s for fixHost
	I1209 02:36:53.043316  308375 start.go:83] releasing machines lock for "newest-cni-828614", held for 4.639402111s
	I1209 02:36:53.043373  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:53.061292  308375 ssh_runner.go:195] Run: cat /version.json
	I1209 02:36:53.061359  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:53.061413  308375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:36:53.061468  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:53.080380  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:53.080681  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	W1209 02:36:49.755643  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:52.245854  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:36:53.225542  308375 ssh_runner.go:195] Run: systemctl --version
	I1209 02:36:53.231579  308375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:36:53.266132  308375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:36:53.270596  308375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:36:53.270665  308375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:36:53.278903  308375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 02:36:53.278919  308375 start.go:496] detecting cgroup driver to use...
	I1209 02:36:53.278944  308375 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:36:53.278972  308375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:36:53.292803  308375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:36:53.304421  308375 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:36:53.304469  308375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:36:53.317876  308375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:36:53.328978  308375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:36:53.407002  308375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:36:53.486817  308375 docker.go:234] disabling docker service ...
	I1209 02:36:53.486890  308375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:36:53.499903  308375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:36:53.510929  308375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:36:53.592272  308375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:36:53.673308  308375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:36:53.685062  308375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:36:53.698627  308375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:36:53.698690  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.706852  308375 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:36:53.706898  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.715251  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.723211  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.731322  308375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:36:53.738794  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.747800  308375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.755681  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.764294  308375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:36:53.771450  308375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:36:53.778115  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:53.866257  308375 ssh_runner.go:195] Run: sudo systemctl restart crio
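	(The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching CRI-O to the systemd cgroup manager before the daemon is restarted. A sketch of the two key substitutions in Go, using multiline regexps; the helper is illustrative, while the path and values are taken from the log:
	
	    package crioconf
	
	    import (
	        "os"
	        "regexp"
	    )
	
	    var (
	        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	    )
	
	    // rewriteCrioConf performs the same line substitutions as the sed
	    // commands in the log: pin the pause image and force systemd.
	    func rewriteCrioConf(path string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        data = pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	        data = cgroupRe.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	        return os.WriteFile(path, data, 0o644)
	    }
	)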
	I1209 02:36:53.997329  308375 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:36:53.997389  308375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:36:54.001435  308375 start.go:564] Will wait 60s for crictl version
	I1209 02:36:54.001503  308375 ssh_runner.go:195] Run: which crictl
	I1209 02:36:54.005035  308375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:36:54.027746  308375 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:36:54.027816  308375 ssh_runner.go:195] Run: crio --version
	I1209 02:36:54.054240  308375 ssh_runner.go:195] Run: crio --version
	I1209 02:36:54.084627  308375 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 02:36:54.085773  308375 cli_runner.go:164] Run: docker network inspect newest-cni-828614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:36:54.103481  308375 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:36:54.107526  308375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:36:54.119356  308375 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1209 02:36:50.572796  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:53.067437  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	I1209 02:36:54.120392  308375 kubeadm.go:884] updating cluster {Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:36:54.120823  308375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:36:54.120915  308375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:54.153979  308375 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:54.153998  308375 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:36:54.154043  308375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:54.179378  308375 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:54.179397  308375 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:36:54.179404  308375 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1209 02:36:54.179489  308375 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-828614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:36:54.179556  308375 ssh_runner.go:195] Run: crio config
	I1209 02:36:54.221864  308375 cni.go:84] Creating CNI manager for ""
	I1209 02:36:54.221885  308375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:54.221900  308375 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 02:36:54.221929  308375 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-828614 NodeName:newest-cni-828614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:36:54.222089  308375 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-828614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
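	(minikube renders the multi-document kubeadm/kubelet/kube-proxy config above and ships it to the node as kubeadm.yaml.new. A quick structural sanity check is to decode the stream document by document; a minimal sketch, assuming gopkg.in/yaml.v3 is available; the kinds helper is illustrative:
	
	    package kubeadmcheck
	
	    import (
	        "bytes"
	        "fmt"
	        "io"
	
	        "gopkg.in/yaml.v3"
	    )
	
	    // kinds lists the `kind:` of every YAML document in data, in order.
	    // For the config above it should yield InitConfiguration,
	    // ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
	    func kinds(data []byte) ([]string, error) {
	        dec := yaml.NewDecoder(bytes.NewReader(data))
	        var out []string
	        for {
	            var doc struct {
	                Kind string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err == io.EOF {
	                break
	            } else if err != nil {
	                return nil, fmt.Errorf("decode: %w", err)
	            }
	            out = append(out, doc.Kind)
	        }
	        return out, nil
	    }
	)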
	I1209 02:36:54.222161  308375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 02:36:54.229937  308375 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:36:54.230003  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:36:54.237334  308375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 02:36:54.250616  308375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 02:36:54.262054  308375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1209 02:36:54.273802  308375 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:36:54.277066  308375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:36:54.286161  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:54.366262  308375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:54.398686  308375 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614 for IP: 192.168.94.2
	I1209 02:36:54.398708  308375 certs.go:195] generating shared ca certs ...
	I1209 02:36:54.398727  308375 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:54.398882  308375 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:36:54.398948  308375 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:36:54.398960  308375 certs.go:257] generating profile certs ...
	I1209 02:36:54.399064  308375 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/client.key
	I1209 02:36:54.399153  308375 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.key.9c61b522
	I1209 02:36:54.399207  308375 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.key
	I1209 02:36:54.399337  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:36:54.399377  308375 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:36:54.399392  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:36:54.399428  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:36:54.399466  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:36:54.399498  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:36:54.399570  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:54.400383  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:36:54.421601  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:36:54.439931  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:36:54.458263  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:36:54.479090  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 02:36:54.498290  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:36:54.514180  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:36:54.530459  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 02:36:54.546342  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:36:54.562512  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:36:54.578803  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:36:54.596146  308375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:36:54.607525  308375 ssh_runner.go:195] Run: openssl version
	I1209 02:36:54.613378  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.620031  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:36:54.626742  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.630181  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.630227  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.665237  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:36:54.672182  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.679130  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:36:54.685803  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.689274  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.689318  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.723355  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:36:54.730229  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.736899  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:36:54.744500  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.748002  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.748048  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.784078  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
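The block above is minikube installing extra CA certificates into the node's trust store: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute the OpenSSL subject hash, then verify the <hash>.0 symlink (51391683.0, b5213941.0, 3ec20f2e.0) that TLS libraries use for lookup. A minimal Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does; the paths are taken from the log and the helper name is hypothetical (needs root to write /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert mirrors the steps above: ask openssl for the
    // certificate's subject hash, then link /etc/ssl/certs/<hash>.0
    // at the PEM so libraries can find the CA by hash.
    func installCACert(pemPath string) error {
        // Equivalent of: openssl x509 -hash -noout -in <pemPath>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // emulate ln -f: replace any stale link
        // Equivalent of: sudo ln -fs <pemPath> /etc/ssl/certs/<hash>.0
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }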
	I1209 02:36:54.791525  308375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:36:54.794957  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 02:36:54.828938  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 02:36:54.863678  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 02:36:54.900401  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 02:36:54.944820  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 02:36:54.989129  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
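Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); a zero exit means the cert is still good for at least that long. The same check expressed in Go with the standard crypto/x509 package; the file path is from the log, the function name is hypothetical:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, i.e. what `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }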
	I1209 02:36:55.046308  308375 kubeadm.go:401] StartCluster: {Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:55.046409  308375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:36:55.046485  308375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:36:55.084504  308375 cri.go:89] found id: "e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b"
	I1209 02:36:55.084527  308375 cri.go:89] found id: "be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773"
	I1209 02:36:55.084533  308375 cri.go:89] found id: "c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a"
	I1209 02:36:55.084538  308375 cri.go:89] found id: "53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963"
	I1209 02:36:55.084543  308375 cri.go:89] found id: ""
	I1209 02:36:55.084586  308375 ssh_runner.go:195] Run: sudo runc list -f json
	W1209 02:36:55.097607  308375 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:55Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:36:55.097707  308375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:36:55.105895  308375 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 02:36:55.105917  308375 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 02:36:55.105970  308375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 02:36:55.113509  308375 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:36:55.114452  308375 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-828614" does not appear in /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:55.115079  308375 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-11001/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-828614" cluster setting kubeconfig missing "newest-cni-828614" context setting]
	I1209 02:36:55.115954  308375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:55.117911  308375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 02:36:55.125224  308375 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1209 02:36:55.125248  308375 kubeadm.go:602] duration metric: took 19.3251ms to restartPrimaryControlPlane
	I1209 02:36:55.125256  308375 kubeadm.go:403] duration metric: took 78.958734ms to StartCluster
	I1209 02:36:55.125270  308375 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:55.125325  308375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:55.127071  308375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
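The kubeconfig repair above detects that the "newest-cni-828614" cluster and context entries are missing and rewrites the file under a lock. A hedged sketch of the existence check using client-go's clientcmd loader (this assumes the standard k8s.io/client-go module is available; the function name is hypothetical):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // hasProfile reports whether a kubeconfig file already carries both
    // a cluster and a context entry for the given profile name.
    func hasProfile(kubeconfigPath, name string) (bool, error) {
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return false, err
        }
        _, cluster := cfg.Clusters[name]
        _, context := cfg.Contexts[name]
        return cluster && context, nil
    }

    func main() {
        ok, err := hasProfile("/home/jenkins/minikube-integration/22081-11001/kubeconfig", "newest-cni-828614")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("profile present:", ok) // false triggers the repair logged above
    }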
	I1209 02:36:55.127273  308375 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:36:55.127340  308375 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:36:55.127442  308375 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-828614"
	I1209 02:36:55.127459  308375 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-828614"
	I1209 02:36:55.127466  308375 addons.go:70] Setting dashboard=true in profile "newest-cni-828614"
	I1209 02:36:55.127479  308375 addons.go:70] Setting default-storageclass=true in profile "newest-cni-828614"
	I1209 02:36:55.127496  308375 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-828614"
	I1209 02:36:55.127485  308375 addons.go:239] Setting addon dashboard=true in "newest-cni-828614"
	W1209 02:36:55.127510  308375 addons.go:248] addon dashboard should already be in state true
	I1209 02:36:55.127527  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:55.127560  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	W1209 02:36:55.127471  308375 addons.go:248] addon storage-provisioner should already be in state true
	I1209 02:36:55.127596  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:55.127858  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.128042  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.128180  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.129550  308375 out.go:179] * Verifying Kubernetes components...
	I1209 02:36:55.134001  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:55.153176  308375 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 02:36:55.153748  308375 addons.go:239] Setting addon default-storageclass=true in "newest-cni-828614"
	W1209 02:36:55.153769  308375 addons.go:248] addon default-storageclass should already be in state true
	I1209 02:36:55.153793  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:55.154229  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.155448  308375 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 02:36:55.155450  308375 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:36:55.156680  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 02:36:55.156699  308375 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 02:36:55.156748  308375 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:55.156770  308375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:36:55.156823  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.156752  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.182645  308375 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:55.182669  308375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:36:55.182730  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.190717  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.190729  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.215410  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.280264  308375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:55.293126  308375 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:36:55.293187  308375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:36:55.303074  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 02:36:55.303100  308375 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 02:36:55.304423  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:55.307166  308375 api_server.go:72] duration metric: took 179.865283ms to wait for apiserver process to appear ...
	I1209 02:36:55.307187  308375 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:36:55.307203  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:55.318294  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 02:36:55.318308  308375 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 02:36:55.320694  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:55.332795  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 02:36:55.332812  308375 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 02:36:55.346441  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 02:36:55.346463  308375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 02:36:55.360713  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 02:36:55.360735  308375 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 02:36:55.374221  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 02:36:55.374245  308375 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 02:36:55.387293  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 02:36:55.387310  308375 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 02:36:55.398678  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 02:36:55.398695  308375 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 02:36:55.410294  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 02:36:55.410311  308375 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 02:36:55.422642  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
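All ten dashboard manifests are applied in a single kubectl invocation by repeating the -f flag, with KUBECONFIG pointing at the in-VM kubeconfig. A minimal sketch of building that command from Go (paths copied from the log; run without the sudo wrapper the log uses, and the manifest list is abbreviated):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-clusterrole.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
            // ... remaining dashboard-*.yaml files from the log line above
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // repeated -f, exactly as logged
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            os.Exit(1)
        }
    }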
	I1209 02:36:56.496770  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 02:36:56.496797  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 02:36:56.496812  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:56.505930  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 02:36:56.505954  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 02:36:56.807696  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:56.812091  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 02:36:56.812119  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
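The healthz probe above progresses through three states: 403 while anonymous requests are still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200 once bootstrap completes. A polling sketch in Go; as an unauthenticated probe it skips server certificate verification, which is an assumption of this sketch rather than what minikube itself does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the deadline passes; 403 and 500 both mean "retry".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch only: trust is not established for this probe,
                // so the server certificate is not verified.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }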
	I1209 02:36:57.009027  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.704572783s)
	I1209 02:36:57.009101  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68838032s)
	I1209 02:36:57.009207  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.586520898s)
	I1209 02:36:57.010590  308375 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-828614 addons enable metrics-server
	
	I1209 02:36:57.019787  308375 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1209 02:36:52.634754  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:36:55.137694  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	I1209 02:36:57.020889  308375 addons.go:530] duration metric: took 1.893554234s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1209 02:36:57.307790  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:57.311714  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 02:36:57.311737  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 02:36:57.807372  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:57.812497  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1209 02:36:57.813588  308375 api_server.go:141] control plane version: v1.35.0-beta.0
	I1209 02:36:57.813611  308375 api_server.go:131] duration metric: took 2.506418126s to wait for apiserver health ...
	I1209 02:36:57.813618  308375 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:36:57.817421  308375 system_pods.go:59] 8 kube-system pods found
	I1209 02:36:57.817468  308375 system_pods.go:61] "coredns-7d764666f9-2gmfb" [07cf9a9f-2b91-4573-9b7e-a960d3bdbc45] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:57.817481  308375 system_pods.go:61] "etcd-newest-cni-828614" [b40c8743-bfbf-43e7-a4ad-3ae1cb4114e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:36:57.817496  308375 system_pods.go:61] "kindnet-fdwzs" [eca30b43-2f4e-4789-8909-c1b9da3b9569] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1209 02:36:57.817506  308375 system_pods.go:61] "kube-apiserver-newest-cni-828614" [12d6ff53-a8bd-4fa7-93ec-842147989244] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:36:57.817517  308375 system_pods.go:61] "kube-controller-manager-newest-cni-828614" [05280260-1034-4afd-8ff7-40b3acf1ef06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:36:57.817529  308375 system_pods.go:61] "kube-proxy-lh72l" [2042b849-e922-4790-9104-b640df5ee37b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 02:36:57.817537  308375 system_pods.go:61] "kube-scheduler-newest-cni-828614" [ff30f0c3-21ae-40f2-bcb4-9b54dfca1e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:36:57.817552  308375 system_pods.go:61] "storage-provisioner" [8ed7e008-713f-42f7-9e3b-83bd745a2ebd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:57.817564  308375 system_pods.go:74] duration metric: took 3.939257ms to wait for pod list to return data ...
	I1209 02:36:57.817576  308375 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:36:57.819982  308375 default_sa.go:45] found service account: "default"
	I1209 02:36:57.820003  308375 default_sa.go:55] duration metric: took 2.421039ms for default service account to be created ...
	I1209 02:36:57.820014  308375 kubeadm.go:587] duration metric: took 2.692715106s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 02:36:57.820036  308375 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:36:57.822413  308375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:36:57.822438  308375 node_conditions.go:123] node cpu capacity is 8
	I1209 02:36:57.822453  308375 node_conditions.go:105] duration metric: took 2.41156ms to run NodePressure ...
	I1209 02:36:57.822467  308375 start.go:242] waiting for startup goroutines ...
	I1209 02:36:57.822480  308375 start.go:247] waiting for cluster config update ...
	I1209 02:36:57.822496  308375 start.go:256] writing updated cluster config ...
	I1209 02:36:57.822759  308375 ssh_runner.go:195] Run: rm -f paused
	I1209 02:36:57.873611  308375 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:36:57.875301  308375 out.go:179] * Done! kubectl is now configured to use "newest-cni-828614" cluster and "default" namespace by default
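The closing line reports a minor-version skew of 1 between the host kubectl (1.34.2) and the cluster (1.35.0-beta.0), which is inside kubectl's supported +/-1 minor skew, so only an informational note is printed. A tiny illustrative sketch of how such a skew could be computed (the parsing is deliberately simplified and hypothetical):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for versions such as
    // "1.34.2" or "1.35.0-beta.0"; parse errors are ignored for brevity.
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            parts := strings.Split(v, ".")
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println("minor skew:", minorSkew("1.34.2", "1.35.0-beta.0")) // prints 1
    }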
	W1209 02:36:54.246366  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:56.746426  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:55.067946  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:57.566900  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.774495495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.777235831Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=565673ab-a6a1-4f79-8d47-e335f2f1bb57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.777835079Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=be1dc6ca-ad50-4f6d-8bfb-473613f10b5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.778775528Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.77919307Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.779953645Z" level=info msg="Ran pod sandbox 147df6690838f376aec3cea43b3cc245559f0d868086885957cac5cefaa17d8b with infra container: kube-system/kindnet-fdwzs/POD" id=565673ab-a6a1-4f79-8d47-e335f2f1bb57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.780678698Z" level=info msg="Ran pod sandbox 1acad2794d491ac3860220d479c79b87672fedbf1f780a4bd8e187372d6c83ad with infra container: kube-system/kube-proxy-lh72l/POD" id=be1dc6ca-ad50-4f6d-8bfb-473613f10b5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.781902944Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=043846d3-2098-4e96-ae5f-99c7a3ced658 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.78207167Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8a266551-a903-4475-a9e1-eaf6366abac6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.782833145Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=83cca73b-e6d6-48fc-8adf-307dd1b36638 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.782923898Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ec726097-0ddf-4a34-a529-7fff85ceae92 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.783859242Z" level=info msg="Creating container: kube-system/kube-proxy-lh72l/kube-proxy" id=71d69aff-dfc9-4f32-b88f-8f35f17159f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.783965158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.784043415Z" level=info msg="Creating container: kube-system/kindnet-fdwzs/kindnet-cni" id=1e678e67-7591-478e-a13b-837c45907ef6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.78412392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.787861288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788407758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788429768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788785466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.817175597Z" level=info msg="Created container 5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a: kube-system/kindnet-fdwzs/kindnet-cni" id=1e678e67-7591-478e-a13b-837c45907ef6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.817713753Z" level=info msg="Starting container: 5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a" id=8a8fe723-4cf5-44d3-b68a-ed03cbe62950 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.819670902Z" level=info msg="Started container" PID=1061 containerID=5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a description=kube-system/kindnet-fdwzs/kindnet-cni id=8a8fe723-4cf5-44d3-b68a-ed03cbe62950 name=/runtime.v1.RuntimeService/StartContainer sandboxID=147df6690838f376aec3cea43b3cc245559f0d868086885957cac5cefaa17d8b
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.820074766Z" level=info msg="Created container 647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef: kube-system/kube-proxy-lh72l/kube-proxy" id=71d69aff-dfc9-4f32-b88f-8f35f17159f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.820513406Z" level=info msg="Starting container: 647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef" id=93787ca3-8781-4244-861b-a9f5067909f2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.823723538Z" level=info msg="Started container" PID=1062 containerID=647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef description=kube-system/kube-proxy-lh72l/kube-proxy id=93787ca3-8781-4244-861b-a9f5067909f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1acad2794d491ac3860220d479c79b87672fedbf1f780a4bd8e187372d6c83ad
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5e99a414ba099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   147df6690838f       kindnet-fdwzs                               kube-system
	647d83eb2b27a       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   3 seconds ago       Running             kube-proxy                1                   1acad2794d491       kube-proxy-lh72l                            kube-system
	e9824d0ad489e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   caef6d07ee8bc       etcd-newest-cni-828614                      kube-system
	be62dc59aed03       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   850cff847a828       kube-scheduler-newest-cni-828614            kube-system
	c891247687a77       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   7e3e468c5e7d5       kube-controller-manager-newest-cni-828614   kube-system
	53c463efbb58c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   7f414cb9490b1       kube-apiserver-newest-cni-828614            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-828614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-828614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=newest-cni-828614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_36_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:36:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-828614
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-828614
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                693eaa58-e11a-4b63-aa70-2ba2e2c1dd88
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-828614                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-fdwzs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-828614             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-828614    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-lh72l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-828614             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  33s   node-controller  Node newest-cni-828614 event: Registered Node newest-cni-828614 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-828614 event: Registered Node newest-cni-828614 in Controller
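
Note that the node is Ready=False with NetworkPluginNotReady because /etc/cni/net.d holds no CNI configuration yet; the resulting node.kubernetes.io/not-ready:NoSchedule taint is also what keeps coredns and storage-provisioner Pending above. Once the restarted kindnet writes its config, the kubelet flips Ready. A simplified sketch of that readiness condition (directory path from the kubelet message; the accepted extensions are an assumption of this sketch):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cniConfigured reports whether any CNI config file exists in the
    // directory the kubelet watches for network plugin configuration.
    func cniConfigured(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            name := e.Name()
            if strings.HasSuffix(name, ".conf") ||
                strings.HasSuffix(name, ".conflist") ||
                strings.HasSuffix(name, ".json") {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println("CNI configured:", cniConfigured("/etc/cni/net.d"))
    }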
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b] <==
	{"level":"warn","ts":"2025-12-09T02:36:55.924855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.933116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.939052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.946052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.952540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.959229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.965360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.973134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.979666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.986303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.997628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.003828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.010842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.017985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.024523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.032184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.038371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.044445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.051408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.057603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.070458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.076601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.083017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.091953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.132714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:37:01 up  1:19,  0 user,  load average: 3.69, 2.65, 1.89
	Linux newest-cni-828614 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a] <==
	I1209 02:36:57.990470       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:57.990709       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:36:57.990815       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:57.990829       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:57.990846       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:58.286232       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:58.286267       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:58.286278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:58.286528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:58.786694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:58.786733       1 metrics.go:72] Registering metrics
	I1209 02:36:58.786801       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963] <==
	I1209 02:36:56.579998       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:56.580074       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:56.580395       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:56.579783       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:36:56.580576       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:56.579797       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:56.580485       1 shared_informer.go:377] "Caches are synced"
	E1209 02:36:56.586698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 02:36:56.587023       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:56.587204       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:56.608154       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:56.620011       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 02:36:56.826366       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:56.851057       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:56.865313       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:56.871415       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:56.876515       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:56.904722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.31.4"}
	I1209 02:36:56.913539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.159.98"}
	I1209 02:36:57.482565       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:37:00.233302       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:37:00.282850       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:37:00.383894       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:37:00.433525       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a] <==
	I1209 02:36:59.747683       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.747882       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748387       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748900       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749223       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749480       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748944       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749760       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749915       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749966       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749968       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749995       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750029       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750078       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750094       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750135       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750227       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750285       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750415       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750427       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.756246       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.841938       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.850276       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.850296       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:36:59.850302       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef] <==
	I1209 02:36:57.858618       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:57.912157       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:58.012264       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:58.012296       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:36:58.012371       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:58.030321       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:58.030386       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:36:58.035281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:58.035671       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:36:58.035712       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:58.036870       1 config.go:309] "Starting node config controller"
	I1209 02:36:58.036886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:58.037073       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:58.037118       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:58.037194       1 config.go:200] "Starting service config controller"
	I1209 02:36:58.037205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:58.038092       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:58.038122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:58.137889       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:58.137921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:58.138086       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:58.139252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773] <==
	I1209 02:36:55.349698       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:36:56.512616       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:56.512841       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:56.512862       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:56.512872       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:56.532961       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:36:56.532996       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:56.535415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:56.535456       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:56.535510       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:56.538078       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:56.636849       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.596226     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-828614\" already exists" pod="kube-system/kube-apiserver-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.596260     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.598987     679 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.599079     679 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.599111     679 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.600099     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.603874     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-828614\" already exists" pod="kube-system/kube-controller-manager-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.603903     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.610201     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-828614\" already exists" pod="kube-system/kube-scheduler-newest-cni-828614"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.465757     679 apiserver.go:52] "Watching apiserver"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.472211     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502666     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502813     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-828614" containerName="kube-controller-manager"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502907     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-828614" containerName="kube-apiserver"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.503063     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.516918     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-lib-modules\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517040     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-cni-cfg\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517071     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-lib-modules\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517293     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-xtables-lock\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517380     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-xtables-lock\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:58 newest-cni-828614 kubelet[679]: E1209 02:36:58.507955     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:58 newest-cni-828614 kubelet[679]: E1209 02:36:58.508091     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
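Note: the kubelet section above ends with systemd stopping kubelet.service while the pause was in flight. A minimal sketch for checking the unit state from the host, assuming the standard "minikube ssh" subcommand (profile name taken from this run):

	# sketch: check whether kubelet is still active inside the node container
	out/minikube-linux-amd64 -p newest-cni-828614 ssh -- sudo systemctl is-active kubelet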
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828614 -n newest-cni-828614: exit status 2 (309.218645ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
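Note: the status probe above uses minikube's Go-template output, where --format={{.APIServer}} prints a single field of the status struct. A sketch combining several fields in one call ({{.Host}} appears in a later probe; {{.Kubelet}} is an assumed field of the same struct):

	# sketch: print host, kubelet, and apiserver state in a single status call
	out/minikube-linux-amd64 status -p newest-cni-828614 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'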
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-828614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9: exit status 1 (57.517799ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-2gmfb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9fnkd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kj4w9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9: exit status 1
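Note: the non-running-pod probe above is a plain kubectl field selector, and the NotFound errors from the follow-up describe are consistent with the listed pods having been replaced between the two calls. A sketch of the same check (context name taken from this run):

	# sketch: list pods in any phase other than Running, across all namespaces
	kubectl --context newest-cni-828614 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'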
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-828614
helpers_test.go:243: (dbg) docker inspect newest-cni-828614:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	        "Created": "2025-12-09T02:36:13.995817577Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:48.456284916Z",
	            "FinishedAt": "2025-12-09T02:36:44.971615436Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/hosts",
	        "LogPath": "/var/lib/docker/containers/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b/bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b-json.log",
	        "Name": "/newest-cni-828614",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-828614:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-828614",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdcb940dfa8f0f0bd69a566cecaf1b258564375fece4871c7e49282c845e370b",
	                "LowerDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc1063782d43de6d7434575d98eb2ae79f1a5929dbb9092c6d8c069790cc3f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-828614",
	                "Source": "/var/lib/docker/volumes/newest-cni-828614/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-828614",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-828614",
	                "name.minikube.sigs.k8s.io": "newest-cni-828614",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5134e33931c98c8d9a528e03c2e0ecfc92e96c79e551c4d23bf0095758bf6db7",
	            "SandboxKey": "/var/run/docker/netns/5134e33931c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-828614": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfdf3df1d734c42201a8f8b2262b719bd3d94c4522be0d2bca9d7ea31c9d112b",
	                    "EndpointID": "b873fd0365085ccb9e0f79a5b1c6a2e87a35e393d654763883a310dc9d1a62f0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ce:83:7b:79:b6:14",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-828614",
	                        "bdcb940dfa8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
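Note: the Ports block in the inspect output above is what the harness reads back with a Go template whenever it needs the SSH endpoint; the same format string appears verbatim in the start logs further down. A sketch isolating the mapped SSH port:

	# sketch: extract the host port mapped to the container's 22/tcp
	docker container inspect newest-cni-828614 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'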
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614: exit status 2 (308.575404ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-828614 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-465214                                                                                                                                                                                                                               │ cert-options-465214          │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ delete  │ -p running-upgrade-099378                                                                                                                                                                                                                            │ running-upgrade-099378       │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:35 UTC │ 09 Dec 25 02:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-126117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-512414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p old-k8s-version-126117 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:36:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:36:48.169421  308375 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:36:48.169547  308375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:48.169558  308375 out.go:374] Setting ErrFile to fd 2...
	I1209 02:36:48.169565  308375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:36:48.169923  308375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:36:48.170497  308375 out.go:368] Setting JSON to false
	I1209 02:36:48.172116  308375 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4757,"bootTime":1765243051,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:36:48.172210  308375 start.go:143] virtualization: kvm guest
	I1209 02:36:48.175792  308375 out.go:179] * [newest-cni-828614] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:36:48.177277  308375 notify.go:221] Checking for updates...
	I1209 02:36:48.177312  308375 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:36:48.178653  308375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:36:48.179907  308375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:48.181392  308375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:36:48.183002  308375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:36:48.184938  308375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:36:48.186841  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:48.187586  308375 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:36:48.221134  308375 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:36:48.221270  308375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:48.298608  308375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:48.284363889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:48.298840  308375 docker.go:319] overlay module found
	I1209 02:36:48.301272  308375 out.go:179] * Using the docker driver based on existing profile
	I1209 02:36:48.302579  308375 start.go:309] selected driver: docker
	I1209 02:36:48.302606  308375 start.go:927] validating driver "docker" against &{Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:48.302754  308375 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:36:48.303505  308375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:36:48.372512  308375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:36:48.360070101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:36:48.372933  308375 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 02:36:48.372974  308375 cni.go:84] Creating CNI manager for ""
	I1209 02:36:48.373038  308375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:48.373095  308375 start.go:353] cluster config:
	{Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:48.375448  308375 out.go:179] * Starting "newest-cni-828614" primary control-plane node in "newest-cni-828614" cluster
	I1209 02:36:48.376621  308375 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:36:48.377871  308375 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	W1209 02:36:45.247158  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:47.747396  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:36:48.379194  308375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:36:48.379230  308375 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1209 02:36:48.379244  308375 cache.go:65] Caching tarball of preloaded images
	I1209 02:36:48.379296  308375 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:36:48.379370  308375 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:36:48.379381  308375 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 02:36:48.379499  308375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/config.json ...
	I1209 02:36:48.403757  308375 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:36:48.403775  308375 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:36:48.403796  308375 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:36:48.403839  308375 start.go:360] acquireMachinesLock for newest-cni-828614: {Name:mkab46b836c33e2166d46d2cab81ca7a184524e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:36:48.403902  308375 start.go:364] duration metric: took 41.149µs to acquireMachinesLock for "newest-cni-828614"
	I1209 02:36:48.403926  308375 start.go:96] Skipping create...Using existing machine configuration
	I1209 02:36:48.403936  308375 fix.go:54] fixHost starting: 
	I1209 02:36:48.404216  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:48.425774  308375 fix.go:112] recreateIfNeeded on newest-cni-828614: state=Stopped err=<nil>
	W1209 02:36:48.425811  308375 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 02:36:45.623036  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:48.069934  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:48.136829  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:36:50.634041  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	I1209 02:36:48.427350  308375 out.go:252] * Restarting existing docker container for "newest-cni-828614" ...
	I1209 02:36:48.427425  308375 cli_runner.go:164] Run: docker start newest-cni-828614
	I1209 02:36:48.735303  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:48.761263  308375 kic.go:430] container "newest-cni-828614" state is running.
	I1209 02:36:48.761854  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:48.781379  308375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/config.json ...
	I1209 02:36:48.781681  308375 machine.go:94] provisionDockerMachine start ...
	I1209 02:36:48.781762  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:48.802160  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:48.802488  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:48.802510  308375 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:36:48.803199  308375 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45034->127.0.0.1:33093: read: connection reset by peer
	I1209 02:36:51.930316  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-828614
	
	I1209 02:36:51.930342  308375 ubuntu.go:182] provisioning hostname "newest-cni-828614"
	I1209 02:36:51.930405  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:51.948305  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:51.948539  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:51.948560  308375 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-828614 && echo "newest-cni-828614" | sudo tee /etc/hostname
	I1209 02:36:52.083987  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-828614
	
	I1209 02:36:52.084074  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.103783  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:52.104089  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:52.104117  308375 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-828614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-828614/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-828614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:36:52.230818  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:36:52.230859  308375 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:36:52.230885  308375 ubuntu.go:190] setting up certificates
	I1209 02:36:52.230901  308375 provision.go:84] configureAuth start
	I1209 02:36:52.230968  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:52.249038  308375 provision.go:143] copyHostCerts
	I1209 02:36:52.249100  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:36:52.249115  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:36:52.249191  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:36:52.249316  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:36:52.249329  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:36:52.249372  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:36:52.249468  308375 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:36:52.249480  308375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:36:52.249524  308375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:36:52.249614  308375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.newest-cni-828614 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-828614]
	I1209 02:36:52.327226  308375 provision.go:177] copyRemoteCerts
	I1209 02:36:52.327284  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:36:52.327334  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.344919  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:52.438272  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:36:52.455238  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:36:52.471607  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:36:52.487661  308375 provision.go:87] duration metric: took 256.744072ms to configureAuth
	I1209 02:36:52.487682  308375 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:36:52.487874  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:52.488037  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.505587  308375 main.go:143] libmachine: Using SSH client type: native
	I1209 02:36:52.505839  308375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1209 02:36:52.505855  308375 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:36:52.795438  308375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:36:52.795463  308375 machine.go:97] duration metric: took 4.013763672s to provisionDockerMachine
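
	The provisioning step that just completed writes a one-line environment file for CRI-O over SSH and restarts the service (the CRIO_MINIKUBE_OPTIONS output above). To confirm it took effect by hand, assuming the crio unit actually references /etc/sysconfig/crio.minikube:

	cat /etc/sysconfig/crio.minikube
	# lists the environment files the unit loads, if this systemd exposes the property
	systemctl show crio -p EnvironmentFiles
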
	I1209 02:36:52.795489  308375 start.go:293] postStartSetup for "newest-cni-828614" (driver="docker")
	I1209 02:36:52.795512  308375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:36:52.795584  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:36:52.795631  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.814111  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:52.906157  308375 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:36:52.909534  308375 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:36:52.909555  308375 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:36:52.909565  308375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:36:52.909610  308375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:36:52.909702  308375 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:36:52.909795  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:36:52.917410  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:52.933628  308375 start.go:296] duration metric: took 138.127648ms for postStartSetup
	I1209 02:36:52.933708  308375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:36:52.933747  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:52.951129  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:53.039107  308375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:36:53.043301  308375 fix.go:56] duration metric: took 4.639361802s for fixHost
	I1209 02:36:53.043316  308375 start.go:83] releasing machines lock for "newest-cni-828614", held for 4.639402111s
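
	The two df probes above measure disk pressure on /var: the first reports percent used, the second gigabytes still available. Run manually they look like this (output values illustrative):

	df -h /var | awk 'NR==2{print $5}'    # e.g. "15%" - percent of /var used
	df -BG /var | awk 'NR==2{print $4}'   # e.g. "78G" - GB available
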
	I1209 02:36:53.043373  308375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-828614
	I1209 02:36:53.061292  308375 ssh_runner.go:195] Run: cat /version.json
	I1209 02:36:53.061359  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:53.061413  308375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:36:53.061468  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:53.080380  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:53.080681  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	W1209 02:36:49.755643  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:52.245854  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:36:53.225542  308375 ssh_runner.go:195] Run: systemctl --version
	I1209 02:36:53.231579  308375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:36:53.266132  308375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:36:53.270596  308375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:36:53.270665  308375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:36:53.278903  308375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 02:36:53.278919  308375 start.go:496] detecting cgroup driver to use...
	I1209 02:36:53.278944  308375 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:36:53.278972  308375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:36:53.292803  308375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:36:53.304421  308375 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:36:53.304469  308375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:36:53.317876  308375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:36:53.328978  308375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:36:53.407002  308375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:36:53.486817  308375 docker.go:234] disabling docker service ...
	I1209 02:36:53.486890  308375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:36:53.499903  308375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:36:53.510929  308375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:36:53.592272  308375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:36:53.673308  308375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:36:53.685062  308375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:36:53.698627  308375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:36:53.698690  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.706852  308375 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:36:53.706898  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.715251  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.723211  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.731322  308375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:36:53.738794  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.747800  308375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.755681  308375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:36:53.764294  308375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:36:53.771450  308375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:36:53.778115  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:53.866257  308375 ssh_runner.go:195] Run: sudo systemctl restart crio
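
	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The net effect can be checked with a grep; the expected values (per the commands in this log) are shown as comments:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
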
	I1209 02:36:53.997329  308375 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:36:53.997389  308375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:36:54.001435  308375 start.go:564] Will wait 60s for crictl version
	I1209 02:36:54.001503  308375 ssh_runner.go:195] Run: which crictl
	I1209 02:36:54.005035  308375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:36:54.027746  308375 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:36:54.027816  308375 ssh_runner.go:195] Run: crio --version
	I1209 02:36:54.054240  308375 ssh_runner.go:195] Run: crio --version
	I1209 02:36:54.084627  308375 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 02:36:54.085773  308375 cli_runner.go:164] Run: docker network inspect newest-cni-828614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:36:54.103481  308375 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:36:54.107526  308375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:36:54.119356  308375 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1209 02:36:50.572796  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:53.067437  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	I1209 02:36:54.120392  308375 kubeadm.go:884] updating cluster {Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:36:54.120823  308375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 02:36:54.120915  308375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:54.153979  308375 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:54.153998  308375 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:36:54.154043  308375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:36:54.179378  308375 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:36:54.179397  308375 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:36:54.179404  308375 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1209 02:36:54.179489  308375 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-828614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:36:54.179556  308375 ssh_runner.go:195] Run: crio config
	I1209 02:36:54.221864  308375 cni.go:84] Creating CNI manager for ""
	I1209 02:36:54.221885  308375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:36:54.221900  308375 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 02:36:54.221929  308375 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-828614 NodeName:newest-cni-828614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:36:54.222089  308375 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-828614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:36:54.222161  308375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 02:36:54.229937  308375 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:36:54.230003  308375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:36:54.237334  308375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 02:36:54.250616  308375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 02:36:54.262054  308375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
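
	The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp just completed. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the staged binary path from this log and a kubeadm new enough to carry the subcommand:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
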
	I1209 02:36:54.273802  308375 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:36:54.277066  308375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:36:54.286161  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:54.366262  308375 ssh_runner.go:195] Run: sudo systemctl start kubelet
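
	At this point the kubelet unit file, its kubeadm drop-in, and the kubeadm config have all been copied over, systemd reloaded, and the service started. The effective unit can be inspected with systemd's own tooling, using the paths written above:

	systemctl cat kubelet                                       # unit plus drop-ins, as merged by systemd
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the 374-byte drop-in from this log
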
	I1209 02:36:54.398686  308375 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614 for IP: 192.168.94.2
	I1209 02:36:54.398708  308375 certs.go:195] generating shared ca certs ...
	I1209 02:36:54.398727  308375 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:54.398882  308375 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:36:54.398948  308375 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:36:54.398960  308375 certs.go:257] generating profile certs ...
	I1209 02:36:54.399064  308375 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/client.key
	I1209 02:36:54.399153  308375 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.key.9c61b522
	I1209 02:36:54.399207  308375 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.key
	I1209 02:36:54.399337  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:36:54.399377  308375 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:36:54.399392  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:36:54.399428  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:36:54.399466  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:36:54.399498  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:36:54.399570  308375 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:36:54.400383  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:36:54.421601  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:36:54.439931  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:36:54.458263  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:36:54.479090  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 02:36:54.498290  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:36:54.514180  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:36:54.530459  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/newest-cni-828614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 02:36:54.546342  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:36:54.562512  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:36:54.578803  308375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:36:54.596146  308375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:36:54.607525  308375 ssh_runner.go:195] Run: openssl version
	I1209 02:36:54.613378  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.620031  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:36:54.626742  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.630181  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.630227  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:36:54.665237  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:36:54.672182  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.679130  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:36:54.685803  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.689274  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.689318  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:36:54.723355  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:36:54.730229  308375 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.736899  308375 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:36:54.744500  308375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.748002  308375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.748048  308375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:36:54.784078  308375 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
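
	The openssl/ln pairs above implement OpenSSL's hashed-directory layout for /etc/ssl/certs: each CA is reachable through a link named after its subject hash, which is what the test -L probes verify. Reproducing one pair from this log:

	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link checked above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
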
	I1209 02:36:54.791525  308375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:36:54.794957  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 02:36:54.828938  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 02:36:54.863678  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 02:36:54.900401  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 02:36:54.944820  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 02:36:54.989129  308375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
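
	Each -checkend 86400 run above asks openssl whether the certificate is still valid 24 hours (86400 seconds) from now; the command exits non-zero if the cert would expire inside that window, presumably so the restart path can decide whether control-plane certs need regenerating. By hand:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "expires within 24h"
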
	I1209 02:36:55.046308  308375 kubeadm.go:401] StartCluster: {Name:newest-cni-828614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-828614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:36:55.046409  308375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:36:55.046485  308375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:36:55.084504  308375 cri.go:89] found id: "e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b"
	I1209 02:36:55.084527  308375 cri.go:89] found id: "be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773"
	I1209 02:36:55.084533  308375 cri.go:89] found id: "c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a"
	I1209 02:36:55.084538  308375 cri.go:89] found id: "53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963"
	I1209 02:36:55.084543  308375 cri.go:89] found id: ""
	I1209 02:36:55.084586  308375 ssh_runner.go:195] Run: sudo runc list -f json
	W1209 02:36:55.097607  308375 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:36:55Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:36:55.097707  308375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:36:55.105895  308375 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 02:36:55.105917  308375 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 02:36:55.105970  308375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 02:36:55.113509  308375 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:36:55.114452  308375 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-828614" does not appear in /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:55.115079  308375 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-11001/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-828614" cluster setting kubeconfig missing "newest-cni-828614" context setting]
	I1209 02:36:55.115954  308375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:55.117911  308375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 02:36:55.125224  308375 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1209 02:36:55.125248  308375 kubeadm.go:602] duration metric: took 19.3251ms to restartPrimaryControlPlane
	I1209 02:36:55.125256  308375 kubeadm.go:403] duration metric: took 78.958734ms to StartCluster
	I1209 02:36:55.125270  308375 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:55.125325  308375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:36:55.127071  308375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:36:55.127273  308375 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:36:55.127340  308375 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:36:55.127442  308375 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-828614"
	I1209 02:36:55.127459  308375 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-828614"
	I1209 02:36:55.127466  308375 addons.go:70] Setting dashboard=true in profile "newest-cni-828614"
	I1209 02:36:55.127479  308375 addons.go:70] Setting default-storageclass=true in profile "newest-cni-828614"
	I1209 02:36:55.127496  308375 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-828614"
	I1209 02:36:55.127485  308375 addons.go:239] Setting addon dashboard=true in "newest-cni-828614"
	W1209 02:36:55.127510  308375 addons.go:248] addon dashboard should already be in state true
	I1209 02:36:55.127527  308375 config.go:182] Loaded profile config "newest-cni-828614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:36:55.127560  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	W1209 02:36:55.127471  308375 addons.go:248] addon storage-provisioner should already be in state true
	I1209 02:36:55.127596  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:55.127858  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.128042  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.128180  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.129550  308375 out.go:179] * Verifying Kubernetes components...
	I1209 02:36:55.134001  308375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:36:55.153176  308375 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 02:36:55.153748  308375 addons.go:239] Setting addon default-storageclass=true in "newest-cni-828614"
	W1209 02:36:55.153769  308375 addons.go:248] addon default-storageclass should already be in state true
	I1209 02:36:55.153793  308375 host.go:66] Checking if "newest-cni-828614" exists ...
	I1209 02:36:55.154229  308375 cli_runner.go:164] Run: docker container inspect newest-cni-828614 --format={{.State.Status}}
	I1209 02:36:55.155448  308375 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 02:36:55.155450  308375 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:36:55.156680  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 02:36:55.156699  308375 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 02:36:55.156748  308375 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:55.156770  308375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:36:55.156823  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.156752  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.182645  308375 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:55.182669  308375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:36:55.182730  308375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-828614
	I1209 02:36:55.190717  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.190729  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.215410  308375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/newest-cni-828614/id_rsa Username:docker}
	I1209 02:36:55.280264  308375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:36:55.293126  308375 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:36:55.293187  308375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:36:55.303074  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 02:36:55.303100  308375 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 02:36:55.304423  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:36:55.307166  308375 api_server.go:72] duration metric: took 179.865283ms to wait for apiserver process to appear ...
	I1209 02:36:55.307187  308375 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:36:55.307203  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:55.318294  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 02:36:55.318308  308375 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 02:36:55.320694  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:36:55.332795  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 02:36:55.332812  308375 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 02:36:55.346441  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 02:36:55.346463  308375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 02:36:55.360713  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 02:36:55.360735  308375 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 02:36:55.374221  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 02:36:55.374245  308375 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 02:36:55.387293  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 02:36:55.387310  308375 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 02:36:55.398678  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 02:36:55.398695  308375 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 02:36:55.410294  308375 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 02:36:55.410311  308375 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 02:36:55.422642  308375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 02:36:56.496770  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 02:36:56.496797  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 02:36:56.496812  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:56.505930  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 02:36:56.505954  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 02:36:56.807696  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:56.812091  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 02:36:56.812119  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 02:36:57.009027  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.704572783s)
	I1209 02:36:57.009101  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68838032s)
	I1209 02:36:57.009207  308375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.586520898s)
	I1209 02:36:57.010590  308375 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-828614 addons enable metrics-server
	
	I1209 02:36:57.019787  308375 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1209 02:36:52.634754  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:36:55.137694  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	I1209 02:36:57.020889  308375 addons.go:530] duration metric: took 1.893554234s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
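
	With the three addons reported enabled, the applied dashboard objects can be listed with the same staged kubectl the log uses; a sketch, assuming the standard kubernetes-dashboard namespace created by dashboard-ns.yaml:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl -n kubernetes-dashboard get deploy,svc
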
	I1209 02:36:57.307790  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:57.311714  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 02:36:57.311737  308375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 02:36:57.807372  308375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:36:57.812497  308375 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1209 02:36:57.813588  308375 api_server.go:141] control plane version: v1.35.0-beta.0
	I1209 02:36:57.813611  308375 api_server.go:131] duration metric: took 2.506418126s to wait for apiserver health ...
	I1209 02:36:57.813618  308375 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:36:57.817421  308375 system_pods.go:59] 8 kube-system pods found
	I1209 02:36:57.817468  308375 system_pods.go:61] "coredns-7d764666f9-2gmfb" [07cf9a9f-2b91-4573-9b7e-a960d3bdbc45] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:57.817481  308375 system_pods.go:61] "etcd-newest-cni-828614" [b40c8743-bfbf-43e7-a4ad-3ae1cb4114e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:36:57.817496  308375 system_pods.go:61] "kindnet-fdwzs" [eca30b43-2f4e-4789-8909-c1b9da3b9569] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1209 02:36:57.817506  308375 system_pods.go:61] "kube-apiserver-newest-cni-828614" [12d6ff53-a8bd-4fa7-93ec-842147989244] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:36:57.817517  308375 system_pods.go:61] "kube-controller-manager-newest-cni-828614" [05280260-1034-4afd-8ff7-40b3acf1ef06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:36:57.817529  308375 system_pods.go:61] "kube-proxy-lh72l" [2042b849-e922-4790-9104-b640df5ee37b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 02:36:57.817537  308375 system_pods.go:61] "kube-scheduler-newest-cni-828614" [ff30f0c3-21ae-40f2-bcb4-9b54dfca1e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:36:57.817552  308375 system_pods.go:61] "storage-provisioner" [8ed7e008-713f-42f7-9e3b-83bd745a2ebd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1209 02:36:57.817564  308375 system_pods.go:74] duration metric: took 3.939257ms to wait for pod list to return data ...
	I1209 02:36:57.817576  308375 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:36:57.819982  308375 default_sa.go:45] found service account: "default"
	I1209 02:36:57.820003  308375 default_sa.go:55] duration metric: took 2.421039ms for default service account to be created ...
	I1209 02:36:57.820014  308375 kubeadm.go:587] duration metric: took 2.692715106s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 02:36:57.820036  308375 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:36:57.822413  308375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:36:57.822438  308375 node_conditions.go:123] node cpu capacity is 8
	I1209 02:36:57.822453  308375 node_conditions.go:105] duration metric: took 2.41156ms to run NodePressure ...
	I1209 02:36:57.822467  308375 start.go:242] waiting for startup goroutines ...
	I1209 02:36:57.822480  308375 start.go:247] waiting for cluster config update ...
	I1209 02:36:57.822496  308375 start.go:256] writing updated cluster config ...
	I1209 02:36:57.822759  308375 ssh_runner.go:195] Run: rm -f paused
	I1209 02:36:57.873611  308375 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:36:57.875301  308375 out.go:179] * Done! kubectl is now configured to use "newest-cni-828614" cluster and "default" namespace by default
	W1209 02:36:54.246366  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:56.746426  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:36:55.067946  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
	W1209 02:36:57.566900  299506 pod_ready.go:104] pod "coredns-5dd5756b68-5d9gm" is not "Ready", error: <nil>
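
The repeated 500s above are the expected restart sequence: /healthz serves 500 while poststarthooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still running, then flips to 200 once they complete. As a rough illustration of the poll loop the api_server.go lines reflect (a self-contained sketch, not minikube's actual code; the URL, timeout, and cadence here are assumptions read off the log):

	// healthzwait is a minimal sketch of polling an apiserver /healthz
	// endpoint until it returns 200, tolerating the transient 500s that
	// unfinished poststarthooks produce.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is self-signed in this setup; a real
			// client should verify against the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the bare "ok" body seen at 02:36:57.812
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms gaps between probes in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}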
	
	
	==> CRI-O <==
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.774495495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.777235831Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=565673ab-a6a1-4f79-8d47-e335f2f1bb57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.777835079Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=be1dc6ca-ad50-4f6d-8bfb-473613f10b5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.778775528Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.77919307Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.779953645Z" level=info msg="Ran pod sandbox 147df6690838f376aec3cea43b3cc245559f0d868086885957cac5cefaa17d8b with infra container: kube-system/kindnet-fdwzs/POD" id=565673ab-a6a1-4f79-8d47-e335f2f1bb57 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.780678698Z" level=info msg="Ran pod sandbox 1acad2794d491ac3860220d479c79b87672fedbf1f780a4bd8e187372d6c83ad with infra container: kube-system/kube-proxy-lh72l/POD" id=be1dc6ca-ad50-4f6d-8bfb-473613f10b5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.781902944Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=043846d3-2098-4e96-ae5f-99c7a3ced658 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.78207167Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8a266551-a903-4475-a9e1-eaf6366abac6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.782833145Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=83cca73b-e6d6-48fc-8adf-307dd1b36638 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.782923898Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ec726097-0ddf-4a34-a529-7fff85ceae92 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.783859242Z" level=info msg="Creating container: kube-system/kube-proxy-lh72l/kube-proxy" id=71d69aff-dfc9-4f32-b88f-8f35f17159f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.783965158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.784043415Z" level=info msg="Creating container: kube-system/kindnet-fdwzs/kindnet-cni" id=1e678e67-7591-478e-a13b-837c45907ef6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.78412392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.787861288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788407758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788429768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.788785466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.817175597Z" level=info msg="Created container 5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a: kube-system/kindnet-fdwzs/kindnet-cni" id=1e678e67-7591-478e-a13b-837c45907ef6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.817713753Z" level=info msg="Starting container: 5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a" id=8a8fe723-4cf5-44d3-b68a-ed03cbe62950 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.819670902Z" level=info msg="Started container" PID=1061 containerID=5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a description=kube-system/kindnet-fdwzs/kindnet-cni id=8a8fe723-4cf5-44d3-b68a-ed03cbe62950 name=/runtime.v1.RuntimeService/StartContainer sandboxID=147df6690838f376aec3cea43b3cc245559f0d868086885957cac5cefaa17d8b
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.820074766Z" level=info msg="Created container 647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef: kube-system/kube-proxy-lh72l/kube-proxy" id=71d69aff-dfc9-4f32-b88f-8f35f17159f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.820513406Z" level=info msg="Starting container: 647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef" id=93787ca3-8781-4244-861b-a9f5067909f2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:57 newest-cni-828614 crio[527]: time="2025-12-09T02:36:57.823723538Z" level=info msg="Started container" PID=1062 containerID=647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef description=kube-system/kube-proxy-lh72l/kube-proxy id=93787ca3-8781-4244-861b-a9f5067909f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1acad2794d491ac3860220d479c79b87672fedbf1f780a4bd8e187372d6c83ad
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5e99a414ba099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   147df6690838f       kindnet-fdwzs                               kube-system
	647d83eb2b27a       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   1acad2794d491       kube-proxy-lh72l                            kube-system
	e9824d0ad489e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   caef6d07ee8bc       etcd-newest-cni-828614                      kube-system
	be62dc59aed03       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   850cff847a828       kube-scheduler-newest-cni-828614            kube-system
	c891247687a77       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   7e3e468c5e7d5       kube-controller-manager-newest-cni-828614   kube-system
	53c463efbb58c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   7f414cb9490b1       kube-apiserver-newest-cni-828614            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-828614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-828614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=newest-cni-828614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_36_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:36:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-828614
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 09 Dec 2025 02:36:56 +0000   Tue, 09 Dec 2025 02:36:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-828614
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                693eaa58-e11a-4b63-aa70-2ba2e2c1dd88
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-828614                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-fdwzs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-828614             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-newest-cni-828614    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-lh72l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-828614             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  34s   node-controller  Node newest-cni-828614 event: Registered Node newest-cni-828614 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-828614 event: Registered Node newest-cni-828614 in Controller
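
The node.kubernetes.io/not-ready:NoSchedule taint and the KubeletNotReady condition above (no CNI configuration file yet) are also why coredns-7d764666f9-2gmfb and storage-provisioner report "untolerated taint(s)" and stay Pending earlier in the log. A short client-go sketch that surfaces the same taint (illustrative only; it assumes a kubeconfig at the default path and reuses the node name from this run):

	// taintcheck prints the taints on the node so the scheduling failure
	// above can be confirmed programmatically.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-828614", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Expected here: node.kubernetes.io/not-ready=:NoSchedule until the
		// CNI config lands in /etc/cni/net.d/ and the kubelet clears it.
		for _, t := range node.Spec.Taints {
			fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}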
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [e9824d0ad489e885ca6035cc8d85ec86ace8a8fc1d776a270c385e57035b610b] <==
	{"level":"warn","ts":"2025-12-09T02:36:55.924855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.933116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.939052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.946052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.952540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.959229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.965360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.973134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.979666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.986303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:55.997628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.003828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.010842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.017985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.024523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.032184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.038371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.044445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.051408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.057603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.070458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.076601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.083017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.091953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:56.132714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:37:02 up  1:19,  0 user,  load average: 3.69, 2.65, 1.89
	Linux newest-cni-828614 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e99a414ba099d4da26608f242b210d1b540b3cea80303220918ac6329516f1a] <==
	I1209 02:36:57.990470       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:57.990709       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:36:57.990815       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:57.990829       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:57.990846       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:58.286232       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:58.286267       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:58.286278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:58.286528       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:58.786694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:58.786733       1 metrics.go:72] Registering metrics
	I1209 02:36:58.786801       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [53c463efbb58c5c4937d116abd49a98be2bbde6c807dd13b25656abd3d57a963] <==
	I1209 02:36:56.579998       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:56.580074       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:56.580395       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:56.579783       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:36:56.580576       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:56.579797       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:56.580485       1 shared_informer.go:377] "Caches are synced"
	E1209 02:36:56.586698       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 02:36:56.587023       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:56.587204       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:56.608154       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:56.620011       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 02:36:56.826366       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:56.851057       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:56.865313       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:56.871415       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:56.876515       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:56.904722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.31.4"}
	I1209 02:36:56.913539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.159.98"}
	I1209 02:36:57.482565       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:37:00.233302       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:37:00.282850       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:37:00.383894       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:37:00.433525       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c891247687a77ff07c3e1f24a0811997a68d0f14f1469fc95b261042e6cea86a] <==
	I1209 02:36:59.747683       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.747882       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748387       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748900       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749223       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749480       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.748944       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749760       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749915       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749966       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749968       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.749995       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750029       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750078       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750094       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750135       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750227       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750285       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750415       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.750427       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.756246       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.841938       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.850276       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:59.850296       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:36:59.850302       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [647d83eb2b27adf38bb4295bd448c67f6e1d6142a0b221249db46213ecca25ef] <==
	I1209 02:36:57.858618       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:57.912157       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:58.012264       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:58.012296       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:36:58.012371       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:58.030321       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:58.030386       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:36:58.035281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:58.035671       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:36:58.035712       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:58.036870       1 config.go:309] "Starting node config controller"
	I1209 02:36:58.036886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:58.037073       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:58.037118       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:58.037194       1 config.go:200] "Starting service config controller"
	I1209 02:36:58.037205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:58.038092       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:58.038122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:58.137889       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:58.137921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:58.138086       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:58.139252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be62dc59aed03890f3748125b25165b69fd841b9f8eec5a745af0ab6b12cc773] <==
	I1209 02:36:55.349698       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:36:56.512616       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:56.512841       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:56.512862       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:56.512872       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:56.532961       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:36:56.532996       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:56.535415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:56.535456       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:56.535510       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:56.538078       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:56.636849       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.596226     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-828614\" already exists" pod="kube-system/kube-apiserver-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.596260     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.598987     679 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.599079     679 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.599111     679 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.600099     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.603874     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-828614\" already exists" pod="kube-system/kube-controller-manager-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: I1209 02:36:56.603903     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-828614"
	Dec 09 02:36:56 newest-cni-828614 kubelet[679]: E1209 02:36:56.610201     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-828614\" already exists" pod="kube-system/kube-scheduler-newest-cni-828614"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.465757     679 apiserver.go:52] "Watching apiserver"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.472211     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502666     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502813     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-828614" containerName="kube-controller-manager"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.502907     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-828614" containerName="kube-apiserver"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: E1209 02:36:57.503063     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.516918     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-lib-modules\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517040     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-cni-cfg\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517071     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-lib-modules\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517293     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eca30b43-2f4e-4789-8909-c1b9da3b9569-xtables-lock\") pod \"kindnet-fdwzs\" (UID: \"eca30b43-2f4e-4789-8909-c1b9da3b9569\") " pod="kube-system/kindnet-fdwzs"
	Dec 09 02:36:57 newest-cni-828614 kubelet[679]: I1209 02:36:57.517380     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2042b849-e922-4790-9104-b640df5ee37b-xtables-lock\") pod \"kube-proxy-lh72l\" (UID: \"2042b849-e922-4790-9104-b640df5ee37b\") " pod="kube-system/kube-proxy-lh72l"
	Dec 09 02:36:58 newest-cni-828614 kubelet[679]: E1209 02:36:58.507955     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-828614" containerName="kube-scheduler"
	Dec 09 02:36:58 newest-cni-828614 kubelet[679]: E1209 02:36:58.508091     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-828614" containerName="etcd"
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:36:58 newest-cni-828614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828614 -n newest-cni-828614
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828614 -n newest-cni-828614: exit status 2 (325.07049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
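
The --format={{.APIServer}} argument above is a Go text/template executed against minikube's status struct, which is why the command prints the bare word "Running". A stripped-down illustration (the Status struct below is a stand-in for this note, not minikube's real type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics the shape a --format template is rendered against.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running", matching the stdout captured above.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}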
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-828614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9: exit status 1 (59.502858ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-2gmfb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-9fnkd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kj4w9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-828614 describe pod coredns-7d764666f9-2gmfb storage-provisioner dashboard-metrics-scraper-867fb5f87b-9fnkd kubernetes-dashboard-b84665fb8-kj4w9: exit status 1
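
The NotFound errors above are most likely an artifact of the post-mortem helper rather than of the cluster: the earlier kubectl get po -A listed pods across all namespaces, but the follow-up describe pod ran without -n and so looked only in the default namespace. Targeting the owning namespace directly should locate them, e.g. (command shown for illustration):

	kubectl --context newest-cni-828614 -n kube-system describe pod coredns-7d764666f9-2gmfb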
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-126117 --alsologtostderr -v=1
E1209 02:37:18.694490   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-126117 --alsologtostderr -v=1: exit status 80 (2.36951849s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-126117 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:37:16.733738  315303 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:16.733970  315303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:16.733978  315303 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:16.733982  315303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:16.734166  315303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:16.734360  315303 out.go:368] Setting JSON to false
	I1209 02:37:16.734376  315303 mustload.go:66] Loading cluster: old-k8s-version-126117
	I1209 02:37:16.734729  315303 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:37:16.735133  315303 cli_runner.go:164] Run: docker container inspect old-k8s-version-126117 --format={{.State.Status}}
	I1209 02:37:16.753917  315303 host.go:66] Checking if "old-k8s-version-126117" exists ...
	I1209 02:37:16.754217  315303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:16.815962  315303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-09 02:37:16.806431037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:16.816569  315303 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-126117 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:37:16.818596  315303 out.go:179] * Pausing node old-k8s-version-126117 ... 
	I1209 02:37:16.819870  315303 host.go:66] Checking if "old-k8s-version-126117" exists ...
	I1209 02:37:16.820129  315303 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:16.820189  315303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-126117
	I1209 02:37:16.837512  315303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/old-k8s-version-126117/id_rsa Username:docker}
	I1209 02:37:16.928812  315303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:16.940923  315303 pause.go:52] kubelet running: true
	I1209 02:37:16.940985  315303 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:17.104564  315303 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:17.104681  315303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:17.171450  315303 cri.go:89] found id: "c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595"
	I1209 02:37:17.171478  315303 cri.go:89] found id: "9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53"
	I1209 02:37:17.171486  315303 cri.go:89] found id: "079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5"
	I1209 02:37:17.171492  315303 cri.go:89] found id: "c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	I1209 02:37:17.171496  315303 cri.go:89] found id: "22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6"
	I1209 02:37:17.171502  315303 cri.go:89] found id: "cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd"
	I1209 02:37:17.171506  315303 cri.go:89] found id: "7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720"
	I1209 02:37:17.171511  315303 cri.go:89] found id: "5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a"
	I1209 02:37:17.171523  315303 cri.go:89] found id: "a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7"
	I1209 02:37:17.171535  315303 cri.go:89] found id: "37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	I1209 02:37:17.171540  315303 cri.go:89] found id: "90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13"
	I1209 02:37:17.171545  315303 cri.go:89] found id: ""
	I1209 02:37:17.171583  315303 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:17.183615  315303 retry.go:31] will retry after 367.86072ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:17Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:17.552284  315303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:17.565326  315303 pause.go:52] kubelet running: false
	I1209 02:37:17.565383  315303 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:17.708454  315303 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:17.708534  315303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:17.778832  315303 cri.go:89] found id: "c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595"
	I1209 02:37:17.778856  315303 cri.go:89] found id: "9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53"
	I1209 02:37:17.778862  315303 cri.go:89] found id: "079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5"
	I1209 02:37:17.778865  315303 cri.go:89] found id: "c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	I1209 02:37:17.778868  315303 cri.go:89] found id: "22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6"
	I1209 02:37:17.778871  315303 cri.go:89] found id: "cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd"
	I1209 02:37:17.778873  315303 cri.go:89] found id: "7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720"
	I1209 02:37:17.778876  315303 cri.go:89] found id: "5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a"
	I1209 02:37:17.778879  315303 cri.go:89] found id: "a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7"
	I1209 02:37:17.778894  315303 cri.go:89] found id: "37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	I1209 02:37:17.778898  315303 cri.go:89] found id: "90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13"
	I1209 02:37:17.778903  315303 cri.go:89] found id: ""
	I1209 02:37:17.778956  315303 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:17.790625  315303 retry.go:31] will retry after 277.172558ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:17Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:18.068151  315303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:18.081095  315303 pause.go:52] kubelet running: false
	I1209 02:37:18.081153  315303 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:18.234862  315303 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:18.234942  315303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:18.303814  315303 cri.go:89] found id: "c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595"
	I1209 02:37:18.303842  315303 cri.go:89] found id: "9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53"
	I1209 02:37:18.303849  315303 cri.go:89] found id: "079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5"
	I1209 02:37:18.303854  315303 cri.go:89] found id: "c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	I1209 02:37:18.303859  315303 cri.go:89] found id: "22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6"
	I1209 02:37:18.303864  315303 cri.go:89] found id: "cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd"
	I1209 02:37:18.303869  315303 cri.go:89] found id: "7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720"
	I1209 02:37:18.303873  315303 cri.go:89] found id: "5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a"
	I1209 02:37:18.303877  315303 cri.go:89] found id: "a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7"
	I1209 02:37:18.303886  315303 cri.go:89] found id: "37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	I1209 02:37:18.303891  315303 cri.go:89] found id: "90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13"
	I1209 02:37:18.303896  315303 cri.go:89] found id: ""
	I1209 02:37:18.303942  315303 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:18.317343  315303 retry.go:31] will retry after 476.568082ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:18Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:18.794056  315303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:18.806711  315303 pause.go:52] kubelet running: false
	I1209 02:37:18.806774  315303 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:18.949899  315303 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:18.949978  315303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:19.019404  315303 cri.go:89] found id: "c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595"
	I1209 02:37:19.019427  315303 cri.go:89] found id: "9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53"
	I1209 02:37:19.019432  315303 cri.go:89] found id: "079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5"
	I1209 02:37:19.019436  315303 cri.go:89] found id: "c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	I1209 02:37:19.019439  315303 cri.go:89] found id: "22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6"
	I1209 02:37:19.019443  315303 cri.go:89] found id: "cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd"
	I1209 02:37:19.019445  315303 cri.go:89] found id: "7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720"
	I1209 02:37:19.019448  315303 cri.go:89] found id: "5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a"
	I1209 02:37:19.019450  315303 cri.go:89] found id: "a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7"
	I1209 02:37:19.019466  315303 cri.go:89] found id: "37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	I1209 02:37:19.019470  315303 cri.go:89] found id: "90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13"
	I1209 02:37:19.019475  315303 cri.go:89] found id: ""
	I1209 02:37:19.019533  315303 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:19.033355  315303 out.go:203] 
	W1209 02:37:19.034662  315303 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:37:19.034686  315303 out.go:285] * 
	W1209 02:37:19.039248  315303 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:37:19.040750  315303 out.go:203] 

                                                
                                                
** /stderr **
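
The failing step in the trace above is `sudo runc list -f json`, which reads runc's default state directory `/run/runc`. On this node that directory does not exist, so every backoff retry (367ms, 277ms, 476ms) hits the identical error until the pause aborts with GUEST_PAUSE; one plausible cause is that crio on this image drives a different OCI runtime (e.g. crun), so runc state is never written even though crio containers are running. The following is a minimal stand-alone diagnostic sketch in Go, assuming only the standard library and a host that can `docker exec` into the node container; the file name and flow are illustrative, not minikube code.

// runclist_check.go - illustrative diagnostic, not part of minikube.
// Runs `runc list -f json` inside the node container via `docker exec`
// and reports the exit code and output, reproducing the failure above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	node := "old-k8s-version-126117" // node container name from this report
	cmd := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Printf("runc state present:\n%s", out)
		return
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 1 with "open /run/runc: no such file or directory"
		// matches the retries in the log: runc has no state dir on this node.
		fmt.Printf("runc list exited %d:\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Println("docker exec failed:", err)
}

A clean run prints the container list as JSON; exit status 1 with the same `open /run/runc` message reproduces the condition the pause path keeps retrying on.
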
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-126117 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-126117
helpers_test.go:243: (dbg) docker inspect old-k8s-version-126117:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	        "Created": "2025-12-09T02:35:09.203047327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:19.406468752Z",
	            "FinishedAt": "2025-12-09T02:36:18.547079603Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hosts",
	        "LogPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4-json.log",
	        "Name": "/old-k8s-version-126117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-126117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-126117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	                "LowerDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-126117",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-126117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-126117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25014d043c5cf19ace2963078af92f0a04a9eaf520664cd5c5dbe3824c991346",
	            "SandboxKey": "/var/run/docker/netns/25014d043c5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-126117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecc05a83343c9bbe58006fef4c60d0178931361725a834370b23a8555dfe27ce",
	                    "EndpointID": "1cf22d63000af0cb7a5f71be2894d7df67a9bc9d184a63a90b67680bf8b56793",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:ca:d0:39:43:27",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-126117",
	                        "fdb4a1a34663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
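
The harness recovers the node's SSH endpoint with the inspect template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` (see the cli_runner line near the top of this trace); applied to the `NetworkSettings.Ports` map above, it resolves to 33078, the port sshutil.go then dials. Below is a stand-alone sketch of the same lookup, assuming the Go standard library and a local docker CLI (file name illustrative):

// sshport.go - illustrative sketch of the port lookup used above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the harness passes to `docker container inspect -f`.
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"old-k8s-version-126117").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Against the inspect output above this prints 33078.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
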
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117: exit status 2 (318.811244ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
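
The "(may be ok)" note reflects that `minikube status` reports a degraded cluster through a nonzero exit code while still printing the host field: the node container is Running, but kubelet was disabled by the partially completed pause, hence the exit status 2 above. The (dbg) wrappers capture both the output and the code via the standard `exec.ExitError` pattern, sketched below under the same assumptions (illustrative, not the harness code):

// exitstatus.go - illustrative sketch of tolerating a nonzero status exit.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-126117")
	out, err := cmd.Output()          // stdout is captured even on a nonzero exit
	fmt.Printf("host state: %s", out) // e.g. "Running"
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A nonzero code here is informational (cluster not fully up),
		// which is why the helper logs it as "may be ok".
		fmt.Println("exit status:", ee.ExitCode())
	}
}
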
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25: (1.056774248s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p old-k8s-version-126117 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:06.265894  312861 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:06.266149  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266159  312861 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:06.266163  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266390  312861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:06.266890  312861 out.go:368] Setting JSON to false
	I1209 02:37:06.268011  312861 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4775,"bootTime":1765243051,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:06.268068  312861 start.go:143] virtualization: kvm guest
	I1209 02:37:06.269973  312861 out.go:179] * [embed-certs-485234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:06.271239  312861 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:06.271260  312861 notify.go:221] Checking for updates...
	I1209 02:37:06.273331  312861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:06.274481  312861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:06.275572  312861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:06.276773  312861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:06.277728  312861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:06.279204  312861 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:06.279294  312861 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:06.279368  312861 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:37:06.279440  312861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:06.303034  312861 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:06.303110  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.356600  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.347325006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.356738  312861 docker.go:319] overlay module found
	I1209 02:37:06.359001  312861 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:06.359972  312861 start.go:309] selected driver: docker
	I1209 02:37:06.359986  312861 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:06.360000  312861 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:06.360532  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.418200  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.408143545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.418358  312861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:06.418551  312861 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:06.419983  312861 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:06.420941  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:06.420995  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:06.421005  312861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:06.421065  312861 start.go:353] cluster config:
	{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:06.422178  312861 out.go:179] * Starting "embed-certs-485234" primary control-plane node in "embed-certs-485234" cluster
	I1209 02:37:06.423106  312861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:06.424069  312861 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:06.424889  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.424931  312861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:06.424943  312861 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:06.424980  312861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:06.425038  312861 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:06.425052  312861 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:06.425142  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:06.425166  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json: {Name:mk4ecce42013d99fe1ed5fecfa3a33c0e934834a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:06.444449  312861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:06.444468  312861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:06.444481  312861 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:06.444504  312861 start.go:360] acquireMachinesLock for embed-certs-485234: {Name:mk9b23f5c442a469a62d61ac899836b50beae7f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:06.444597  312861 start.go:364] duration metric: took 74.067µs to acquireMachinesLock for "embed-certs-485234"
	I1209 02:37:06.444619  312861 start.go:93] Provisioning new machine with config: &{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:06.444720  312861 start.go:125] createHost starting for "" (driver="docker")
	W1209 02:37:02.634996  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.135565  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.746125  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:08.245123  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:07.633907  300341 pod_ready.go:94] pod "coredns-66bc5c9577-gtkkc" is "Ready"
	I1209 02:37:07.633932  300341 pod_ready.go:86] duration metric: took 34.504712821s for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.636195  300341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.639858  300341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.639883  300341 pod_ready.go:86] duration metric: took 3.667895ms for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.641854  300341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.645251  300341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.645272  300341 pod_ready.go:86] duration metric: took 3.400654ms for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.647046  300341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.832888  300341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.832916  300341 pod_ready.go:86] duration metric: took 185.849084ms for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.033001  300341 pod_ready.go:83] waiting for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.433254  300341 pod_ready.go:94] pod "kube-proxy-nkdhm" is "Ready"
	I1209 02:37:08.433283  300341 pod_ready.go:86] duration metric: took 400.256248ms for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.632462  300341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032519  300341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:09.032544  300341 pod_ready.go:86] duration metric: took 400.052955ms for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032557  300341 pod_ready.go:40] duration metric: took 35.906617096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:09.076201  300341 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:37:09.153412  300341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-512414" cluster and "default" namespace by default
	I1209 02:37:06.446141  312861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:06.446346  312861 start.go:159] libmachine.API.Create for "embed-certs-485234" (driver="docker")
	I1209 02:37:06.446376  312861 client.go:173] LocalClient.Create starting
	I1209 02:37:06.446433  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:06.446463  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446481  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446530  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:06.446551  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446560  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446913  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:06.462783  312861 cli_runner.go:211] docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:06.462837  312861 network_create.go:284] running [docker network inspect embed-certs-485234] to gather additional debugging logs...
	I1209 02:37:06.462851  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234
	W1209 02:37:06.477787  312861 cli_runner.go:211] docker network inspect embed-certs-485234 returned with exit code 1
	I1209 02:37:06.477816  312861 network_create.go:287] error running [docker network inspect embed-certs-485234]: docker network inspect embed-certs-485234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-485234 not found
	I1209 02:37:06.477839  312861 network_create.go:289] output of [docker network inspect embed-certs-485234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-485234 not found
	
	** /stderr **
	I1209 02:37:06.477923  312861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:06.494719  312861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:06.495379  312861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:06.496115  312861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:06.496652  312861 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e16439d105c6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:ee:5c:7c:6c:62} reservation:<nil>}
	I1209 02:37:06.497265  312861 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ecc05a83343c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:d2:77:3b:89:79} reservation:<nil>}
	I1209 02:37:06.498119  312861 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0c90}
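The network.go:211 lines above show the free-subnet scan: candidate 192.168.x.0/24 blocks are tried with the third octet stepping by 9 (49, 58, 67, 76, 85, ...), and the first CIDR not already backing a docker bridge wins — here 192.168.94.0/24. A minimal sketch of that scan, assuming a precomputed set of taken CIDRs (an illustration, not minikube's actual network package):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.x.0/24 candidates in steps of 9, as the log
// above shows, and returns the first one not already in use.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue // subnet already backs an existing docker bridge
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// The five taken subnets reported in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	subnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet) // 192.168.94.0/24
}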
	I1209 02:37:06.498145  312861 network_create.go:124] attempt to create docker network embed-certs-485234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1209 02:37:06.498186  312861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-485234 embed-certs-485234
	I1209 02:37:06.545208  312861 network_create.go:108] docker network embed-certs-485234 192.168.94.0/24 created
	I1209 02:37:06.545234  312861 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-485234" container
	I1209 02:37:06.545311  312861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:06.562656  312861 cli_runner.go:164] Run: docker volume create embed-certs-485234 --label name.minikube.sigs.k8s.io=embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:06.579351  312861 oci.go:103] Successfully created a docker volume embed-certs-485234
	I1209 02:37:06.579429  312861 cli_runner.go:164] Run: docker run --rm --name embed-certs-485234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --entrypoint /usr/bin/test -v embed-certs-485234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:06.968560  312861 oci.go:107] Successfully prepared a docker volume embed-certs-485234
	I1209 02:37:06.968678  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.968693  312861 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:06.968796  312861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:10.828650  312861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.859783742s)
	I1209 02:37:10.828684  312861 kic.go:203] duration metric: took 3.859986647s to extract preloaded images to volume ...
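The preload step above fills the freshly created embed-certs-485234 volume by running tar -I lz4 inside a throwaway kicbase container, so the cached images land in the volume before the node container ever starts. A hedged sketch of the same docker invocation from Go (extractPreload is a hypothetical helper; the arguments mirror the logged command):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload populates a named docker volume by running tar inside a
// disposable container, matching the shape of the command in the log above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
		"-v", volume+":/extractDir", // named volume to fill
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4",
		"embed-certs-485234",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}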
	W1209 02:37:10.828767  312861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:10.828801  312861 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:10.828839  312861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:10.885101  312861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-485234 --name embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-485234 --network embed-certs-485234 --ip 192.168.94.2 --volume embed-certs-485234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:37:11.162572  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Running}}
	I1209 02:37:11.182739  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.201533  312861 cli_runner.go:164] Run: docker exec embed-certs-485234 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:11.245603  312861 oci.go:144] the created container "embed-certs-485234" has a running status.
	I1209 02:37:11.245680  312861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa...
	W1209 02:37:10.267075  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:12.746430  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:13.247465  302799 pod_ready.go:94] pod "coredns-7d764666f9-m6tbs" is "Ready"
	I1209 02:37:13.247521  302799 pod_ready.go:86] duration metric: took 34.507076064s for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.252380  302799 pod_ready.go:83] waiting for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.257623  302799 pod_ready.go:94] pod "etcd-no-preload-185074" is "Ready"
	I1209 02:37:13.257682  302799 pod_ready.go:86] duration metric: took 5.27485ms for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.259429  302799 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.263091  302799 pod_ready.go:94] pod "kube-apiserver-no-preload-185074" is "Ready"
	I1209 02:37:13.263117  302799 pod_ready.go:86] duration metric: took 3.670015ms for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.264813  302799 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:11.537220  312861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:11.563323  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.583790  312861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:11.583816  312861 kic_runner.go:114] Args: [docker exec --privileged embed-certs-485234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:37:11.626606  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.645123  312861 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:11.645212  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.664460  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.664789  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.664805  312861 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:11.795359  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.795387  312861 ubuntu.go:182] provisioning hostname "embed-certs-485234"
	I1209 02:37:11.795448  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.814229  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.814492  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.814514  312861 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-485234 && echo "embed-certs-485234" | sudo tee /etc/hostname
	I1209 02:37:11.948171  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.948244  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.966144  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.966365  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.966384  312861 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-485234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-485234/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-485234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:37:12.090842  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:37:12.090872  312861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:12.090923  312861 ubuntu.go:190] setting up certificates
	I1209 02:37:12.090933  312861 provision.go:84] configureAuth start
	I1209 02:37:12.090984  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.108441  312861 provision.go:143] copyHostCerts
	I1209 02:37:12.108498  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:12.108513  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:12.108581  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:12.108718  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:12.108731  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:12.108780  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:12.108915  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:12.108926  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:12.108962  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:12.109046  312861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.embed-certs-485234 san=[127.0.0.1 192.168.94.2 embed-certs-485234 localhost minikube]
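provision.go:117 above issues a server certificate whose SANs cover every address the machine may be reached on (127.0.0.1, 192.168.94.2, the hostname, localhost, minikube). A minimal self-contained sketch with Go's stdlib crypto follows; it self-signs for brevity where the real flow signs with the CA key, and the key type and validity window are illustrative assumptions:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-485234"}},
		// SAN list copied from the san=[...] log line above.
		DNSNames:    []string{"embed-certs-485234", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(24 * time.Hour), // illustrative validity
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; to match the log, pass the CA certificate and
	// CA key as the parent and signer instead of &tmpl and key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}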
	I1209 02:37:12.185770  312861 provision.go:177] copyRemoteCerts
	I1209 02:37:12.185823  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:12.185867  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.203781  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.297266  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:12.315682  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:37:12.332372  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:37:12.348767  312861 provision.go:87] duration metric: took 257.824432ms to configureAuth
	I1209 02:37:12.348791  312861 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:12.348966  312861 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:12.349051  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.367892  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:12.368130  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:12.368152  312861 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:12.631127  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:12.631150  312861 machine.go:97] duration metric: took 986.000884ms to provisionDockerMachine
	I1209 02:37:12.631160  312861 client.go:176] duration metric: took 6.184776828s to LocalClient.Create
	I1209 02:37:12.631178  312861 start.go:167] duration metric: took 6.184833791s to libmachine.API.Create "embed-certs-485234"
	I1209 02:37:12.631185  312861 start.go:293] postStartSetup for "embed-certs-485234" (driver="docker")
	I1209 02:37:12.631193  312861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:12.631247  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:12.631288  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.650047  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.745621  312861 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:12.749630  312861 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:12.749691  312861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:37:12.749704  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:12.749756  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:12.749822  312861 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:12.749906  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:12.758040  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:12.779782  312861 start.go:296] duration metric: took 148.5859ms for postStartSetup
	I1209 02:37:12.780088  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.798780  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:12.799048  312861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:12.799087  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.816209  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.906142  312861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:12.910519  312861 start.go:128] duration metric: took 6.465788374s to createHost
	I1209 02:37:12.910538  312861 start.go:83] releasing machines lock for "embed-certs-485234", held for 6.465929672s
	I1209 02:37:12.910606  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.928304  312861 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:12.928356  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.928375  312861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:12.928447  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.946358  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.946972  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:13.091177  312861 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:13.097600  312861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:13.131258  312861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:13.135743  312861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:13.135810  312861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:13.162689  312861 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 02:37:13.162715  312861 start.go:496] detecting cgroup driver to use...
	I1209 02:37:13.162750  312861 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:37:13.162798  312861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:13.178717  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:13.190805  312861 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:13.190853  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:13.206264  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:13.222864  312861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:13.305814  312861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:13.390556  312861 docker.go:234] disabling docker service ...
	I1209 02:37:13.390674  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:13.409495  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:13.422267  312861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:13.506320  312861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:13.589113  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:13.600697  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:13.614485  312861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:13.614532  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.624541  312861 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:13.624587  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.633049  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.641219  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.650011  312861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:13.657733  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.665900  312861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.678728  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
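The run of sed -i commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A small sketch of the first two rewrites done with Go regexps instead of sed (illustration only; the input snippet is a stand-in for the real config file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	fmt.Println(conf)
}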
	I1209 02:37:13.686933  312861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:13.693823  312861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:37:13.700444  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:13.779960  312861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:37:13.910038  312861 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:13.910103  312861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:37:13.914205  312861 start.go:564] Will wait 60s for crictl version
	I1209 02:37:13.914265  312861 ssh_runner.go:195] Run: which crictl
	I1209 02:37:13.917709  312861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:13.941238  312861 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:13.941311  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.969399  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.997525  312861 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:13.444584  302799 pod_ready.go:94] pod "kube-controller-manager-no-preload-185074" is "Ready"
	I1209 02:37:13.444613  302799 pod_ready.go:86] duration metric: took 179.781521ms for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.644581  302799 pod_ready.go:83] waiting for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.044726  302799 pod_ready.go:94] pod "kube-proxy-8jh88" is "Ready"
	I1209 02:37:14.044754  302799 pod_ready.go:86] duration metric: took 400.15086ms for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.243839  302799 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644301  302799 pod_ready.go:94] pod "kube-scheduler-no-preload-185074" is "Ready"
	I1209 02:37:14.644322  302799 pod_ready.go:86] duration metric: took 400.457904ms for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644333  302799 pod_ready.go:40] duration metric: took 35.907468936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:14.691366  302799 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:37:14.693696  302799 out.go:179] * Done! kubectl is now configured to use "no-preload-185074" cluster and "default" namespace by default
	I1209 02:37:13.998454  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:14.015735  312861 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:14.019587  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:14.029452  312861 kubeadm.go:884] updating cluster {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:14.029561  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:14.029613  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.062629  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.062664  312861 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:14.062704  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.087930  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.087950  312861 cache_images.go:86] Images are preloaded, skipping loading
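cache_images.go:86 above decides the images are preloaded by listing what the runtime already holds. The check can be reproduced by parsing crictl images --output json, whose top-level shape is {"images":[{"repoTags":[...]}]}; the expected-tag list in the sketch below is an illustrative subset, not the full preload manifest:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors the relevant part of `crictl images --output json`.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the v1.34.2 preload.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.34.2",
		"registry.k8s.io/pause:3.10.1",
	} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}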
	I1209 02:37:14.087958  312861 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:14.088051  312861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-485234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:37:14.088114  312861 ssh_runner.go:195] Run: crio config
	I1209 02:37:14.133509  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:14.133535  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:14.133556  312861 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:14.133578  312861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-485234 NodeName:embed-certs-485234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:14.133735  312861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-485234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:37:14.133794  312861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:14.141697  312861 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:14.141757  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:14.149416  312861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1209 02:37:14.162206  312861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:14.177373  312861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1209 02:37:14.189424  312861 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:14.192881  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
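The one-liner above upserts the control-plane entry in /etc/hosts: grep -v drops any stale line for the name, echo appends the current mapping, and the result is staged in /tmp and copied into place with sudo (a copy rather than a rename, likely because /etc/hosts is bind-mounted inside the container). A sketch of the same upsert in Go, with upsertHost as a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any stale "<ip>\t<name>" line and appends the current one,
// mirroring the grep -v / echo pipeline in the logged command.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// The logged command stages the result in /tmp and copies it into place
	// with sudo; writing directly needs the same root privilege.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}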
	I1209 02:37:14.201952  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:14.282853  312861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:14.304730  312861 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234 for IP: 192.168.94.2
	I1209 02:37:14.304752  312861 certs.go:195] generating shared ca certs ...
	I1209 02:37:14.304774  312861 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.304940  312861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:14.305016  312861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:14.305033  312861 certs.go:257] generating profile certs ...
	I1209 02:37:14.305100  312861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key
	I1209 02:37:14.305120  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt with IP's: []
	I1209 02:37:14.359436  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt ...
	I1209 02:37:14.359461  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt: {Name:mkd2687220e2c1a496f0919e5b4ee3ae985b0d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359653  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key ...
	I1209 02:37:14.359668  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key: {Name:mk9eda0520f2cbbe6316507c37cd6f28fc511268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359822  312861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20
	I1209 02:37:14.359847  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1209 02:37:14.444770  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 ...
	I1209 02:37:14.444793  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20: {Name:mk94bd2fac7c7e957c0ee327319c5c1e8a6301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.444968  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 ...
	I1209 02:37:14.444991  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20: {Name:mkacd03a1ebe1fb35635f22c6c191b2975875de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.445113  312861 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt
	I1209 02:37:14.445190  312861 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key
	I1209 02:37:14.445244  312861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key
	I1209 02:37:14.445259  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt with IP's: []
	I1209 02:37:14.560806  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt ...
	I1209 02:37:14.560826  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt: {Name:mke7ad5eda062e0b1092e0004408a09aa647aeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.560983  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key ...
	I1209 02:37:14.561002  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key: {Name:mk93c4daac2f0f9d1f8c2f6e132f0bae11b524ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.561200  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:14.561241  312861 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:14.561252  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:14.561274  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:14.561307  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:14.561340  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:14.561405  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:14.561980  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:14.580295  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:14.597083  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:14.613685  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:14.630255  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 02:37:14.648077  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:14.666598  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:14.683845  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:14.701559  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:14.724314  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:14.741496  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:14.760427  312861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:14.773786  312861 ssh_runner.go:195] Run: openssl version
	I1209 02:37:14.779710  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.787281  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:14.795901  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799927  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799992  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.839135  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.847352  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.854769  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.861800  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:14.869148  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872807  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872857  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.906788  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:14.913728  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:14.920733  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.928244  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:14.935526  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939120  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939164  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.983518  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:14.991697  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
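The openssl x509 -hash / ln -fs pairs above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's default directory lookup finds trust anchors. A sketch of that step, with linkBySubjectHash as a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash symlinks a certificate into certsDir under the
// "<subject-hash>.0" name that OpenSSL's lookup expects.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // emulate ln -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}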
	I1209 02:37:15.000864  312861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:15.005011  312861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:15.005053  312861 kubeadm.go:401] StartCluster: {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:15.005116  312861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:15.005173  312861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:15.035472  312861 cri.go:89] found id: ""
	I1209 02:37:15.035518  312861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:15.045322  312861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:15.053145  312861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:15.053203  312861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:15.061178  312861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:15.061197  312861 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:15.061235  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:15.068770  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:15.068824  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:15.075842  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:15.083627  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:15.083711  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:15.091022  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.098306  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:15.098366  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.105103  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:15.112368  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:15.112418  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 02:37:15.119369  312861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:15.155406  312861 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:15.155454  312861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:15.189920  312861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:15.190010  312861 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:15.190083  312861 kubeadm.go:319] OS: Linux
	I1209 02:37:15.190144  312861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:15.190210  312861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:15.190296  312861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:15.190379  312861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:15.190454  312861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:15.190527  312861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:15.190604  312861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:15.190702  312861 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:15.249252  312861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:15.249405  312861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:15.249583  312861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:15.256114  312861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:15.259205  312861 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:15.259301  312861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:15.259380  312861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:15.555393  312861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:15.791444  312861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:16.204198  312861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	
	
	==> CRI-O <==
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.481115522Z" level=info msg="Created container 90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b/kubernetes-dashboard" id=6bb81f12-8346-406a-925d-83edb2a52e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.481955644Z" level=info msg="Starting container: 90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13" id=6122ed9d-3022-4652-ba72-bdfea2e81c86 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.484136916Z" level=info msg="Started container" PID=1714 containerID=90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b/kubernetes-dashboard id=6122ed9d-3022-4652-ba72-bdfea2e81c86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f58e3c0b0419a0c5bdd47f5f6f05d518d2d1e78ac2f7a1472e59956186d9b8fa
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.538695066Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e16b0347-8429-4c68-bf68-cf9b11d217df name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.539560735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=505470f5-f79d-4c4d-9b81-358eb1cb1456 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.540548341Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4c9c4c22-b258-40d2-9b6e-ea76f9789b06 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.540726958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544687548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544868938Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0e802140fa546f8886723083ca856e743a4267ac19f5456d7a9cf438f3365f3e/merged/etc/passwd: no such file or directory"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544897321Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0e802140fa546f8886723083ca856e743a4267ac19f5456d7a9cf438f3365f3e/merged/etc/group: no such file or directory"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.545183705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.573707275Z" level=info msg="Created container c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595: kube-system/storage-provisioner/storage-provisioner" id=4c9c4c22-b258-40d2-9b6e-ea76f9789b06 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.574211442Z" level=info msg="Starting container: c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595" id=1ae2d797-6a19-488a-b488-110bfd7f8c42 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.575966757Z" level=info msg="Started container" PID=1738 containerID=c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595 description=kube-system/storage-provisioner/storage-provisioner id=1ae2d797-6a19-488a-b488-110bfd7f8c42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5b228a1c60fc802d3d4b51f123de42c340fce43b13fb117bd454ba38c1b9184
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.419528482Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ff4da2d3-4e41-4818-94de-89f2d5d98b6b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.420573571Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=42fe3f74-90f0-49c0-8a43-5549b52956ad name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.421574766Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=2768de4e-64a2-41ba-80c5-65d55e74c2a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.421722781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.427934355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.428667694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.466393218Z" level=info msg="Created container 37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=2768de4e-64a2-41ba-80c5-65d55e74c2a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.467002708Z" level=info msg="Starting container: 37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7" id=d9348085-2e45-4959-9e3f-1a5be2fa3bbb name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.469204383Z" level=info msg="Started container" PID=1753 containerID=37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper id=d9348085-2e45-4959-9e3f-1a5be2fa3bbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3d1d3361e84b6f2399e8b45d89e6641877b26ac1ff079199e4d2654e1b3e2e8
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.552401108Z" level=info msg="Removing container: 26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283" id=d62127d3-ed65-403e-90cc-35d6e4415b67 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.563215153Z" level=info msg="Removed container 26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=d62127d3-ed65-403e-90cc-35d6e4415b67 name=/runtime.v1.RuntimeService/RemoveContainer
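
Each Created/Started pair above is one CRI RuntimeService round trip from the kubelet into CRI-O. The same container state can be inspected by hand with crictl; a small Go wrapper as a sketch, assuming crictl is on PATH and talking to the default CRI-O socket:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List all container IDs in the kubernetes-dashboard namespace,
		// matching the pods created in the CRI-O log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kubernetes-dashboard").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}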
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	37c28b22b7b48       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   b3d1d3361e84b       dashboard-metrics-scraper-5f989dc9cf-bd6dc       kubernetes-dashboard
	c3513e6f3e957       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   c5b228a1c60fc       storage-provisioner                              kube-system
	90f9e969d62ef       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   f58e3c0b0419a       kubernetes-dashboard-8694d4445c-5rc6b            kubernetes-dashboard
	9cdb1dfdcfea4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   85f51789d3825       coredns-5dd5756b68-5d9gm                         kube-system
	097c72a753fd3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   55f5c04accafb       busybox                                          default
	079fae7ab6686       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   074bc5aa1371c       kube-proxy-xjvf6                                 kube-system
	c6b69e396ad3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   c5b228a1c60fc       storage-provisioner                              kube-system
	22e7685929bf9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   c78a1761c7bcd       kindnet-xk6zs                                    kube-system
	cd4f4b4fa3c59       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   b7edf220de050       kube-apiserver-old-k8s-version-126117            kube-system
	7b6946b6f60bb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   1e841a76c71c7       kube-controller-manager-old-k8s-version-126117   kube-system
	5c61431ded035       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   f31dfa5f7ddba       etcd-old-k8s-version-126117                      kube-system
	a014d20dec589       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   a4c33924dc17f       kube-scheduler-old-k8s-version-126117            kube-system
	
	
	==> coredns [9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57248 - 31912 "HINFO IN 6151329663776760756.4907000566883293878. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.447825139s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
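
The "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which keeps the readiness endpoint failing until every registered plugin reports ready; here the kubernetes plugin even starts serving with an unsynced API (the WARNING above) before readiness clears. The endpoint itself is plain HTTP on port 8181; a quick probe as a sketch, with the pod IP as a placeholder:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The ready plugin serves /ready on :8181 and returns 200 only
		// after all plugins (including kubernetes) have signalled ready.
		const target = "http://10.244.0.5:8181/ready" // pod IP is a placeholder
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get(target)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("coredns ready:", resp.StatusCode == http.StatusOK)
	}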
	
	
	==> describe nodes <==
	Name:               old-k8s-version-126117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-126117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=old-k8s-version-126117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-126117
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-126117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                fe5af2e7-907b-43f5-907c-9c3129342d44
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-5d9gm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-126117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-xk6zs                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-126117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-126117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-xjvf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-126117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-bd6dc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5rc6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node old-k8s-version-126117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-126117 event: Registered Node old-k8s-version-126117 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-126117 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-126117 event: Registered Node old-k8s-version-126117 in Controller
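
The Ready condition in the table above is the signal minikube's own status checks key off. Reading it programmatically with client-go, as a sketch; the kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-126117", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Walk the conditions shown in the "describe nodes" table above.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
			}
		}
	}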
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a] <==
	{"level":"info","ts":"2025-12-09T02:36:25.981047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-09T02:36:25.981133Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-09T02:36:25.981285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:36:25.981325Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:36:25.984263Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-09T02:36:25.984454Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-09T02:36:25.984475Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-09T02:36:25.984563Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:36:25.984568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:36:26.971294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.972285Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-126117 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-09T02:36:26.972296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:36:26.972329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:36:26.972537Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-09T02:36:26.972625Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-09T02:36:26.973871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-09T02:36:26.973932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-09T02:36:45.618448Z","caller":"traceutil/trace.go:171","msg":"trace[517070542] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"106.017576ms","start":"2025-12-09T02:36:45.512407Z","end":"2025-12-09T02:36:45.618425Z","steps":["trace[517070542] 'process raft request'  (duration: 105.489953ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:36:46.643499Z","caller":"traceutil/trace.go:171","msg":"trace[1318274902] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"133.047374ms","start":"2025-12-09T02:36:46.510425Z","end":"2025-12-09T02:36:46.643473Z","steps":["trace[1318274902] 'process raft request'  (duration: 108.676105ms)","trace[1318274902] 'compare'  (duration: 24.216513ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:37:20 up  1:19,  0 user,  load average: 3.00, 2.56, 1.87
	Linux old-k8s-version-126117 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6] <==
	I1209 02:36:29.031413       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:29.031876       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 02:36:29.032105       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:29.032129       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:29.032153       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:29.319947       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:29.327474       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:29.327497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:29.328675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:29.827738       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:29.827781       1 metrics.go:72] Registering metrics
	I1209 02:36:29.827850       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:39.320343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:39.320391       1 main.go:301] handling current node
	I1209 02:36:49.320677       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:49.320733       1 main.go:301] handling current node
	I1209 02:36:59.319943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:59.319981       1 main.go:301] handling current node
	I1209 02:37:09.320917       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:37:09.320950       1 main.go:301] handling current node
	I1209 02:37:19.320984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:37:19.321025       1 main.go:301] handling current node
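
The "Handling node with IPs" lines repeat on a fixed 10-second cadence: kindnet re-walks the node list on every tick and reconciles routes for each one. The shape of that loop, as a sketch rather than kindnet's actual code:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			// In kindnet this walks the node informer cache and programs
			// routes/nftables per node; here we only log the tick.
			fmt.Println("handling nodes")
		}
	}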
	
	
	==> kube-apiserver [cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd] <==
	I1209 02:36:28.030569       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1209 02:36:28.120523       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 02:36:28.120916       1 shared_informer.go:318] Caches are synced for configmaps
	I1209 02:36:28.121134       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1209 02:36:28.122110       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1209 02:36:28.122133       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:28.120536       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1209 02:36:28.141784       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1209 02:36:28.141923       1 aggregator.go:166] initial CRD sync complete...
	I1209 02:36:28.141956       1 autoregister_controller.go:141] Starting autoregister controller
	I1209 02:36:28.142096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:36:28.142130       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:36:28.154240       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1209 02:36:28.173239       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:29.023205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:36:29.343409       1 controller.go:624] quota admission added evaluator for: namespaces
	I1209 02:36:29.390379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1209 02:36:29.423521       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:29.433602       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:29.444948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1209 02:36:29.497603       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.183.225"}
	I1209 02:36:29.510997       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.12.224"}
	I1209 02:36:40.802658       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1209 02:36:40.822707       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:36:40.823021       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720] <==
	I1209 02:36:40.841046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.783032ms"
	I1209 02:36:40.841207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.964043ms"
	I1209 02:36:40.845214       1 shared_informer.go:318] Caches are synced for PVC protection
	I1209 02:36:40.851297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.021945ms"
	I1209 02:36:40.851486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.284µs"
	I1209 02:36:40.854018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.684855ms"
	I1209 02:36:40.854112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.725µs"
	I1209 02:36:40.857924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.678µs"
	I1209 02:36:40.867161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.906µs"
	I1209 02:36:40.894896       1 shared_informer.go:318] Caches are synced for cronjob
	I1209 02:36:40.902378       1 shared_informer.go:318] Caches are synced for resource quota
	I1209 02:36:40.914226       1 shared_informer.go:318] Caches are synced for resource quota
	I1209 02:36:40.959660       1 shared_informer.go:318] Caches are synced for persistent volume
	I1209 02:36:41.327176       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:36:41.358525       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:36:41.358561       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1209 02:36:44.512134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.821µs"
	I1209 02:36:45.619998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.887µs"
	I1209 02:36:46.645283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.069µs"
	I1209 02:36:48.528232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.410039ms"
	I1209 02:36:48.528321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.019µs"
	I1209 02:37:02.565998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.775µs"
	I1209 02:37:03.107696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.615886ms"
	I1209 02:37:03.107803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.541µs"
	I1209 02:37:11.153226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.044µs"
	
	
	==> kube-proxy [079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5] <==
	I1209 02:36:28.887976       1 server_others.go:69] "Using iptables proxy"
	I1209 02:36:28.903022       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1209 02:36:28.924519       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:28.927792       1 server_others.go:152] "Using iptables Proxier"
	I1209 02:36:28.927844       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1209 02:36:28.927871       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1209 02:36:28.927908       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1209 02:36:28.930755       1 server.go:846] "Version info" version="v1.28.0"
	I1209 02:36:28.930905       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:28.931797       1 config.go:188] "Starting service config controller"
	I1209 02:36:28.933253       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1209 02:36:28.932404       1 config.go:97] "Starting endpoint slice config controller"
	I1209 02:36:28.933369       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1209 02:36:28.932787       1 config.go:315] "Starting node config controller"
	I1209 02:36:28.933438       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1209 02:36:29.033532       1 shared_informer.go:318] Caches are synced for node config
	I1209 02:36:29.033669       1 shared_informer.go:318] Caches are synced for service config
	I1209 02:36:29.033709       1 shared_informer.go:318] Caches are synced for endpoint slice config
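
The proxier log notes that it sets route_localnet=1 so NodePorts answer on loopback addresses. That knob is an ordinary sysctl, so the same effect can be reproduced directly; a sketch that requires root:

	package main

	import "os"

	func main() {
		// Equivalent of `sysctl -w net.ipv4.conf.all.route_localnet=1`,
		// which kube-proxy performs at startup per the log above.
		if err := os.WriteFile("/proc/sys/net/ipv4/conf/all/route_localnet",
			[]byte("1"), 0644); err != nil {
			panic(err)
		}
	}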
	
	
	==> kube-scheduler [a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7] <==
	I1209 02:36:26.387945       1 serving.go:348] Generated self-signed cert in-memory
	W1209 02:36:28.065528       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:28.065567       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:28.065585       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:28.065599       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:28.094707       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1209 02:36:28.094736       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:28.096239       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:28.096792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 02:36:28.097162       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1209 02:36:28.097284       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1209 02:36:28.198217       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
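
The scheduler warning above prints its own remedy: bind the extension-apiserver-authentication-reader role to the component's service account. The same rolebinding created with client-go instead of kubectl, as a sketch in which the binding name and subject service account are placeholders:

	package main

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes running inside the cluster
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "scheduler-auth-reader", // placeholder name
				Namespace: "kube-system",
			},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "Role",
				Name:     "extension-apiserver-authentication-reader",
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "kube-scheduler", // placeholder subject
				Namespace: "kube-system",
			}},
		}
		if _, err := cs.RbacV1().RoleBindings("kube-system").Create(
			context.TODO(), rb, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}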
	
	
	==> kubelet <==
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.840508     728 topology_manager.go:215] "Topology Admit Handler" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955330     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5rc6b\" (UID: \"4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955606     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6166aa42-3f63-4436-b48c-c2a876ef76a1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-bd6dc\" (UID: \"6166aa42-3f63-4436-b48c-c2a876ef76a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955693     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkgb\" (UniqueName: \"kubernetes.io/projected/6166aa42-3f63-4436-b48c-c2a876ef76a1-kube-api-access-rzkgb\") pod \"dashboard-metrics-scraper-5f989dc9cf-bd6dc\" (UID: \"6166aa42-3f63-4436-b48c-c2a876ef76a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955806     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gzc\" (UniqueName: \"kubernetes.io/projected/4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a-kube-api-access-98gzc\") pod \"kubernetes-dashboard-8694d4445c-5rc6b\" (UID: \"4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b"
	Dec 09 02:36:44 old-k8s-version-126117 kubelet[728]: I1209 02:36:44.494157     728 scope.go:117] "RemoveContainer" containerID="dea6fd16bceb91616c8ca5c9398b5abfea11227ab50af30d98cea266a3878316"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: I1209 02:36:45.499021     728 scope.go:117] "RemoveContainer" containerID="dea6fd16bceb91616c8ca5c9398b5abfea11227ab50af30d98cea266a3878316"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: I1209 02:36:45.499499     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: E1209 02:36:45.500127     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:46 old-k8s-version-126117 kubelet[728]: I1209 02:36:46.502627     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:46 old-k8s-version-126117 kubelet[728]: E1209 02:36:46.503089     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:48 old-k8s-version-126117 kubelet[728]: I1209 02:36:48.520873     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b" podStartSLOduration=1.251732233 podCreationTimestamp="2025-12-09 02:36:40 +0000 UTC" firstStartedPulling="2025-12-09 02:36:41.167508085 +0000 UTC m=+15.833666288" lastFinishedPulling="2025-12-09 02:36:48.436580502 +0000 UTC m=+23.102738714" observedRunningTime="2025-12-09 02:36:48.520571545 +0000 UTC m=+23.186729764" watchObservedRunningTime="2025-12-09 02:36:48.520804659 +0000 UTC m=+23.186962878"
	Dec 09 02:36:51 old-k8s-version-126117 kubelet[728]: I1209 02:36:51.142810     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:51 old-k8s-version-126117 kubelet[728]: E1209 02:36:51.143072     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:59 old-k8s-version-126117 kubelet[728]: I1209 02:36:59.538309     728 scope.go:117] "RemoveContainer" containerID="c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.418971     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.551145     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.551367     728 scope.go:117] "RemoveContainer" containerID="37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: E1209 02:37:02.551783     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:37:11 old-k8s-version-126117 kubelet[728]: I1209 02:37:11.143130     728 scope.go:117] "RemoveContainer" containerID="37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	Dec 09 02:37:11 old-k8s-version-126117 kubelet[728]: E1209 02:37:11.143671     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: kubelet.service: Consumed 1.477s CPU time.
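
The kubelet's restart back-off doubles on each crash ("back-off 10s" then "back-off 20s" above) and caps at five minutes. The same schedule expressed with apimachinery's wait.Backoff, as a sketch:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// CrashLoopBackOff: 10s, 20s, 40s, ... capped at 5m, matching the
		// kubelet messages for dashboard-metrics-scraper above.
		b := wait.Backoff{Duration: 10 * time.Second, Factor: 2, Steps: 6, Cap: 5 * time.Minute}
		for i := 0; i < 6; i++ {
			fmt.Println("next restart in", b.Step())
		}
	}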
	
	
	==> kubernetes-dashboard [90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13] <==
	2025/12/09 02:36:48 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:48 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:48 Using secret token for csrf signing
	2025/12/09 02:36:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:48 Successful initial request to the apiserver, version: v1.28.0
	2025/12/09 02:36:48 Generating JWE encryption key
	2025/12/09 02:36:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:48 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:48 Creating in-cluster Sidecar client
	2025/12/09 02:36:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:48 Serving insecurely on HTTP port: 9090
	2025/12/09 02:37:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:48 Starting overwatch
	
	
	==> storage-provisioner [c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595] <==
	I1209 02:36:59.588332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:36:59.595720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:36:59.595771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 02:37:16.992980       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:16.993048       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85c09fe8-de97-42ff-bfa4-d07a489e759c", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df became leader
	I1209 02:37:16.993114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df!
	I1209 02:37:17.093492       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df!
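
Before starting its controller, the provisioner blocks on the kube-system/k8s.io-minikube-hostpath lock; the ~17-second gap between "attempting to acquire" and "successfully acquired" above is it waiting out the previous holder's lease. The same pattern with client-go's leaderelection package, as a sketch using a Lease lock (this provisioner's log shows an older Endpoints-based lock):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes running inside the cluster
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					fmt.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					fmt.Println("lost lease; shutting down")
				},
			},
		})
	}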
	
	
	==> storage-provisioner [c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c] <==
	I1209 02:36:28.838810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:36:58.841471       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-126117 -n old-k8s-version-126117: exit status 2 (341.042628ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-126117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-126117
helpers_test.go:243: (dbg) docker inspect old-k8s-version-126117:

-- stdout --
	[
	    {
	        "Id": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	        "Created": "2025-12-09T02:35:09.203047327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:19.406468752Z",
	            "FinishedAt": "2025-12-09T02:36:18.547079603Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/hosts",
	        "LogPath": "/var/lib/docker/containers/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4/fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4-json.log",
	        "Name": "/old-k8s-version-126117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-126117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-126117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdb4a1a346638ee632ba31176330f2544886e9a9ee4794d7761c41dbccab3ad4",
	                "LowerDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/477ee04dabdbfe61908510c141d1d1995f7ba45f679d182301c8c8a9ea786cf5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-126117",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-126117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-126117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-126117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25014d043c5cf19ace2963078af92f0a04a9eaf520664cd5c5dbe3824c991346",
	            "SandboxKey": "/var/run/docker/netns/25014d043c5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-126117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecc05a83343c9bbe58006fef4c60d0178931361725a834370b23a8555dfe27ce",
	                    "EndpointID": "1cf22d63000af0cb7a5f71be2894d7df67a9bc9d184a63a90b67680bf8b56793",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2e:ca:d0:39:43:27",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-126117",
	                        "fdb4a1a34663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
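Note: the full docker inspect dump above is what the post-mortem captures, but a single field can be pulled with a Go template instead of parsing the whole JSON document. A short sketch, assuming the docker CLI is on PATH; the same --format trick appears in the cli_runner lines throughout this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns just .State.Status ("running", "paused", ...),
// the same field shown near the top of the inspect output above.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	st, err := containerStatus("old-k8s-version-126117")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("status:", st)
}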
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117: exit status 2 (361.488769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
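Note: as the "(may be ok)" remark indicates, minikube status encodes cluster state in its exit code, so exit status 2 with "Running" on stdout is informational rather than a hard failure. A sketch of capturing both the output and the exit code from Go, using the binary path from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-126117",
		"-n", "old-k8s-version-126117")
	out, err := cmd.Output() // stdout still carries "Running" even on exit 2
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	fmt.Printf("host=%q exit=%d\n", string(out), code)
}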
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-126117 logs -n 25: (1.232705903s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-512414 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:06.265894  312861 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:06.266149  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266159  312861 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:06.266163  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266390  312861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:06.266890  312861 out.go:368] Setting JSON to false
	I1209 02:37:06.268011  312861 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4775,"bootTime":1765243051,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:06.268068  312861 start.go:143] virtualization: kvm guest
	I1209 02:37:06.269973  312861 out.go:179] * [embed-certs-485234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:06.271239  312861 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:06.271260  312861 notify.go:221] Checking for updates...
	I1209 02:37:06.273331  312861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:06.274481  312861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:06.275572  312861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:06.276773  312861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:06.277728  312861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:06.279204  312861 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:06.279294  312861 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:06.279368  312861 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:37:06.279440  312861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:06.303034  312861 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:06.303110  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.356600  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.347325006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.356738  312861 docker.go:319] overlay module found
	I1209 02:37:06.359001  312861 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:06.359972  312861 start.go:309] selected driver: docker
	I1209 02:37:06.359986  312861 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:06.360000  312861 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:06.360532  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.418200  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.408143545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.418358  312861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:06.418551  312861 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:06.419983  312861 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:06.420941  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:06.420995  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:06.421005  312861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:06.421065  312861 start.go:353] cluster config:
	{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:06.422178  312861 out.go:179] * Starting "embed-certs-485234" primary control-plane node in "embed-certs-485234" cluster
	I1209 02:37:06.423106  312861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:06.424069  312861 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:06.424889  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.424931  312861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:06.424943  312861 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:06.424980  312861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:06.425038  312861 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:06.425052  312861 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:06.425142  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:06.425166  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json: {Name:mk4ecce42013d99fe1ed5fecfa3a33c0e934834a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:06.444449  312861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:06.444468  312861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:06.444481  312861 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:06.444504  312861 start.go:360] acquireMachinesLock for embed-certs-485234: {Name:mk9b23f5c442a469a62d61ac899836b50beae7f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:06.444597  312861 start.go:364] duration metric: took 74.067µs to acquireMachinesLock for "embed-certs-485234"
	I1209 02:37:06.444619  312861 start.go:93] Provisioning new machine with config: &{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:06.444720  312861 start.go:125] createHost starting for "" (driver="docker")
	W1209 02:37:02.634996  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.135565  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.746125  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:08.245123  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:07.633907  300341 pod_ready.go:94] pod "coredns-66bc5c9577-gtkkc" is "Ready"
	I1209 02:37:07.633932  300341 pod_ready.go:86] duration metric: took 34.504712821s for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.636195  300341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.639858  300341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.639883  300341 pod_ready.go:86] duration metric: took 3.667895ms for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.641854  300341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.645251  300341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.645272  300341 pod_ready.go:86] duration metric: took 3.400654ms for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.647046  300341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.832888  300341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.832916  300341 pod_ready.go:86] duration metric: took 185.849084ms for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.033001  300341 pod_ready.go:83] waiting for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.433254  300341 pod_ready.go:94] pod "kube-proxy-nkdhm" is "Ready"
	I1209 02:37:08.433283  300341 pod_ready.go:86] duration metric: took 400.256248ms for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.632462  300341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032519  300341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:09.032544  300341 pod_ready.go:86] duration metric: took 400.052955ms for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032557  300341 pod_ready.go:40] duration metric: took 35.906617096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:09.076201  300341 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:37:09.153412  300341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-512414" cluster and "default" namespace by default
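	Note: the pod_ready lines above poll each labeled kube-system component until it reports Ready (or disappears). A rough command-line equivalent of one iteration, driven from Go with kubectl wait; the labels and timeout here are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// waitReady blocks until pods matching the label are Ready, similar in
	// spirit to the pod_ready.go polling in the log above.
	func waitReady(context, label string) error {
		out, err := exec.Command("kubectl", "--context", context,
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod", "-l", label, "--timeout=120s").CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		for _, label := range []string{"k8s-app=kube-dns", "component=etcd"} {
			if err := waitReady("default-k8s-diff-port-512414", label); err != nil {
				fmt.Println("wait failed for", label, ":", err)
			}
		}
	}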
	I1209 02:37:06.446141  312861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:06.446346  312861 start.go:159] libmachine.API.Create for "embed-certs-485234" (driver="docker")
	I1209 02:37:06.446376  312861 client.go:173] LocalClient.Create starting
	I1209 02:37:06.446433  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:06.446463  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446481  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446530  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:06.446551  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446560  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446913  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:06.462783  312861 cli_runner.go:211] docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:06.462837  312861 network_create.go:284] running [docker network inspect embed-certs-485234] to gather additional debugging logs...
	I1209 02:37:06.462851  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234
	W1209 02:37:06.477787  312861 cli_runner.go:211] docker network inspect embed-certs-485234 returned with exit code 1
	I1209 02:37:06.477816  312861 network_create.go:287] error running [docker network inspect embed-certs-485234]: docker network inspect embed-certs-485234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-485234 not found
	I1209 02:37:06.477839  312861 network_create.go:289] output of [docker network inspect embed-certs-485234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-485234 not found
	
	** /stderr **
	I1209 02:37:06.477923  312861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:06.494719  312861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:06.495379  312861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:06.496115  312861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:06.496652  312861 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e16439d105c6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:ee:5c:7c:6c:62} reservation:<nil>}
	I1209 02:37:06.497265  312861 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ecc05a83343c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:d2:77:3b:89:79} reservation:<nil>}
	I1209 02:37:06.498119  312861 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0c90}
	I1209 02:37:06.498145  312861 network_create.go:124] attempt to create docker network embed-certs-485234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1209 02:37:06.498186  312861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-485234 embed-certs-485234
	I1209 02:37:06.545208  312861 network_create.go:108] docker network embed-certs-485234 192.168.94.0/24 created
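	Note: the subnet scan above walks candidate private /24 blocks (49, 58, 67, 76, 85, then 94 — steps of 9) and creates the cluster network on the first one no existing bridge occupies. A simplified sketch of that selection; minikube's real allocator is more general, and the taken set would come from inspecting existing docker networks rather than being hard-coded:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first 192.168.x.0/24 candidate not already
	// in use, stepping by 9 as the skips in this log do.
	func firstFreeSubnet(taken map[string]bool) *net.IPNet {
		for third := 49; third <= 254; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				_, subnet, _ := net.ParseCIDR(cidr)
				return subnet
			}
		}
		return nil
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		fmt.Println("free subnet:", firstFreeSubnet(taken)) // 192.168.94.0/24
	}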
	I1209 02:37:06.545234  312861 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-485234" container
	I1209 02:37:06.545311  312861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:06.562656  312861 cli_runner.go:164] Run: docker volume create embed-certs-485234 --label name.minikube.sigs.k8s.io=embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:06.579351  312861 oci.go:103] Successfully created a docker volume embed-certs-485234
	I1209 02:37:06.579429  312861 cli_runner.go:164] Run: docker run --rm --name embed-certs-485234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --entrypoint /usr/bin/test -v embed-certs-485234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:06.968560  312861 oci.go:107] Successfully prepared a docker volume embed-certs-485234
	I1209 02:37:06.968678  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.968693  312861 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:06.968796  312861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:10.828650  312861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.859783742s)
	I1209 02:37:10.828684  312861 kic.go:203] duration metric: took 3.859986647s to extract preloaded images to volume ...
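	Note: the ~3.86s step above seeds the cluster's /var volume by running a throwaway container whose entrypoint is tar, decompressing the lz4 preload straight into the named volume. A minimal reproduction from Go; kicImage and the tarball path are placeholders for the full values shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kicImage is a placeholder; substitute the full kicbase-builds reference
	// (tag and sha256 digest) used in the run above.
	const kicImage = "gcr.io/k8s-minikube/kicbase-builds:<tag>@sha256:<digest>"

	// extractPreload untars an lz4-compressed preload into a docker volume.
	func extractPreload(tarball, volume string) error {
		out, err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			kicImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
		).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		_ = extractPreload("/path/to/preloaded-images.tar.lz4", "embed-certs-485234")
	}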
	W1209 02:37:10.828767  312861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:10.828801  312861 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:10.828839  312861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:10.885101  312861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-485234 --name embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-485234 --network embed-certs-485234 --ip 192.168.94.2 --volume embed-certs-485234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:37:11.162572  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Running}}
	I1209 02:37:11.182739  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.201533  312861 cli_runner.go:164] Run: docker exec embed-certs-485234 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:11.245603  312861 oci.go:144] the created container "embed-certs-485234" has a running status.
	I1209 02:37:11.245680  312861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa...
	W1209 02:37:10.267075  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:12.746430  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:13.247465  302799 pod_ready.go:94] pod "coredns-7d764666f9-m6tbs" is "Ready"
	I1209 02:37:13.247521  302799 pod_ready.go:86] duration metric: took 34.507076064s for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.252380  302799 pod_ready.go:83] waiting for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.257623  302799 pod_ready.go:94] pod "etcd-no-preload-185074" is "Ready"
	I1209 02:37:13.257682  302799 pod_ready.go:86] duration metric: took 5.27485ms for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.259429  302799 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.263091  302799 pod_ready.go:94] pod "kube-apiserver-no-preload-185074" is "Ready"
	I1209 02:37:13.263117  302799 pod_ready.go:86] duration metric: took 3.670015ms for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.264813  302799 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:11.537220  312861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:11.563323  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.583790  312861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:11.583816  312861 kic_runner.go:114] Args: [docker exec --privileged embed-certs-485234 chown docker:docker /home/docker/.ssh/authorized_keys]
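	Note: the kic_runner steps above bootstrap SSH access into the node container: the freshly generated public key is copied to /home/docker/.ssh/authorized_keys, then chowned so the docker user owns it. A standalone sketch of those two steps (container name from this run; the key path is illustrative, and minikube pushes the key through its own kic_runner rather than docker cp):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pub := "/path/to/id_rsa.pub" // illustrative; minikube generates this key
		steps := [][]string{
			{"docker", "cp", pub, "embed-certs-485234:/home/docker/.ssh/authorized_keys"},
			{"docker", "exec", "--privileged", "embed-certs-485234",
				"chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				fmt.Println("step failed:", err, string(out))
			}
		}
	}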
	I1209 02:37:11.626606  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.645123  312861 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:11.645212  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.664460  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.664789  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.664805  312861 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:11.795359  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.795387  312861 ubuntu.go:182] provisioning hostname "embed-certs-485234"
	I1209 02:37:11.795448  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.814229  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.814492  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.814514  312861 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-485234 && echo "embed-certs-485234" | sudo tee /etc/hostname
	I1209 02:37:11.948171  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.948244  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.966144  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.966365  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.966384  312861 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-485234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-485234/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-485234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:37:12.090842  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:37:12.090872  312861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:12.090923  312861 ubuntu.go:190] setting up certificates
	I1209 02:37:12.090933  312861 provision.go:84] configureAuth start
	I1209 02:37:12.090984  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.108441  312861 provision.go:143] copyHostCerts
	I1209 02:37:12.108498  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:12.108513  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:12.108581  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:12.108718  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:12.108731  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:12.108780  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:12.108915  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:12.108926  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:12.108962  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:12.109046  312861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.embed-certs-485234 san=[127.0.0.1 192.168.94.2 embed-certs-485234 localhost minikube]
	I1209 02:37:12.185770  312861 provision.go:177] copyRemoteCerts
	I1209 02:37:12.185823  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:12.185867  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.203781  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.297266  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:12.315682  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:37:12.332372  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:37:12.348767  312861 provision.go:87] duration metric: took 257.824432ms to configureAuth
	I1209 02:37:12.348791  312861 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:12.348966  312861 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:12.349051  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.367892  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:12.368130  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:12.368152  312861 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:12.631127  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:12.631150  312861 machine.go:97] duration metric: took 986.000884ms to provisionDockerMachine
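
The `CRIO_MINIKUBE_OPTIONS` line written just above lands in /etc/sysconfig/crio.minikube; presumably the kicbase crio unit pulls it in via an `EnvironmentFile=` directive and appends `$CRIO_MINIKUBE_OPTIONS` to the daemon command line, which is why a plain `systemctl restart crio` suffices to pick up the `--insecure-registry 10.96.0.0/12` flag (the service CIDR configured later in the kubeadm config). A sketch of what such a drop-in could look like (assumed, not shown in this log):

	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
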
	I1209 02:37:12.631160  312861 client.go:176] duration metric: took 6.184776828s to LocalClient.Create
	I1209 02:37:12.631178  312861 start.go:167] duration metric: took 6.184833791s to libmachine.API.Create "embed-certs-485234"
	I1209 02:37:12.631185  312861 start.go:293] postStartSetup for "embed-certs-485234" (driver="docker")
	I1209 02:37:12.631193  312861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:12.631247  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:12.631288  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.650047  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.745621  312861 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:12.749630  312861 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:12.749691  312861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:37:12.749704  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:12.749756  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:12.749822  312861 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:12.749906  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:12.758040  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:12.779782  312861 start.go:296] duration metric: took 148.5859ms for postStartSetup
	I1209 02:37:12.780088  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.798780  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:12.799048  312861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:12.799087  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.816209  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.906142  312861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:12.910519  312861 start.go:128] duration metric: took 6.465788374s to createHost
	I1209 02:37:12.910538  312861 start.go:83] releasing machines lock for "embed-certs-485234", held for 6.465929672s
	I1209 02:37:12.910606  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.928304  312861 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:12.928356  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.928375  312861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:12.928447  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.946358  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.946972  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:13.091177  312861 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:13.097600  312861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:13.131258  312861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:13.135743  312861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:13.135810  312861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:13.162689  312861 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
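
The find/mv pass above sidelines any pre-existing bridge or podman CNI configs by renaming them with a `.mk_disabled` suffix, so only the CNI minikube installs (kindnet, selected below) ends up configuring pod networking. Restoring one later is just the reverse rename (path taken from the log line above):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	        /etc/cni/net.d/87-podman-bridge.conflist
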
	I1209 02:37:13.162715  312861 start.go:496] detecting cgroup driver to use...
	I1209 02:37:13.162750  312861 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:37:13.162798  312861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:13.178717  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:13.190805  312861 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:13.190853  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:13.206264  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:13.222864  312861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:13.305814  312861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:13.390556  312861 docker.go:234] disabling docker service ...
	I1209 02:37:13.390674  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:13.409495  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:13.422267  312861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:13.506320  312861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:13.589113  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:13.600697  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:13.614485  312861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:13.614532  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.624541  312861 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:13.624587  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.633049  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.641219  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.650011  312861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:13.657733  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.665900  312861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.678728  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
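
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following runtime settings (reconstructed from the commands themselves, assuming the stock drop-in already carried pause_image and cgroup_manager keys):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
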
	I1209 02:37:13.686933  312861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:13.693823  312861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:37:13.700444  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:13.779960  312861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:37:13.910038  312861 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:13.910103  312861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:37:13.914205  312861 start.go:564] Will wait 60s for crictl version
	I1209 02:37:13.914265  312861 ssh_runner.go:195] Run: which crictl
	I1209 02:37:13.917709  312861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:13.941238  312861 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:13.941311  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.969399  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.997525  312861 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:13.444584  302799 pod_ready.go:94] pod "kube-controller-manager-no-preload-185074" is "Ready"
	I1209 02:37:13.444613  302799 pod_ready.go:86] duration metric: took 179.781521ms for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.644581  302799 pod_ready.go:83] waiting for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.044726  302799 pod_ready.go:94] pod "kube-proxy-8jh88" is "Ready"
	I1209 02:37:14.044754  302799 pod_ready.go:86] duration metric: took 400.15086ms for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.243839  302799 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644301  302799 pod_ready.go:94] pod "kube-scheduler-no-preload-185074" is "Ready"
	I1209 02:37:14.644322  302799 pod_ready.go:86] duration metric: took 400.457904ms for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644333  302799 pod_ready.go:40] duration metric: took 35.907468936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:14.691366  302799 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:37:14.693696  302799 out.go:179] * Done! kubectl is now configured to use "no-preload-185074" cluster and "default" namespace by default
	I1209 02:37:13.998454  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:14.015735  312861 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:14.019587  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
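
The hosts update above uses a filter-and-copy pattern instead of `sed -i`: strip any stale `host.minikube.internal` line, append the fresh entry, write to a temp file, then `sudo cp` it back. Likely this is because /etc/hosts is a bind mount inside the container, so replacing the inode (as `sed -i` does) would break the mount, while overwriting the contents in place is safe. The pattern in isolation:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts
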
	I1209 02:37:14.029452  312861 kubeadm.go:884] updating cluster {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:14.029561  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:14.029613  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.062629  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.062664  312861 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:14.062704  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.087930  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.087950  312861 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:14.087958  312861 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:14.088051  312861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-485234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
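
In the kubelet drop-in above, the empty `ExecStart=` is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before the override on the next line redefines it. The effective merged unit can be inspected on the node with:

	systemctl cat kubelet
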
	I1209 02:37:14.088114  312861 ssh_runner.go:195] Run: crio config
	I1209 02:37:14.133509  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:14.133535  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:14.133556  312861 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:14.133578  312861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-485234 NodeName:embed-certs-485234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:14.133735  312861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-485234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
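
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written as one multi-document file, staged as kubeadm.yaml.new below and promoted to /var/tmp/minikube/kubeadm.yaml just before `kubeadm init` consumes it. On recent kubeadm releases the staged file could also be sanity-checked first (hypothetical step, not run in this log):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	     --config /var/tmp/minikube/kubeadm.yaml
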
	
	I1209 02:37:14.133794  312861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:14.141697  312861 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:14.141757  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:14.149416  312861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1209 02:37:14.162206  312861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:14.177373  312861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1209 02:37:14.189424  312861 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:14.192881  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:14.201952  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:14.282853  312861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:14.304730  312861 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234 for IP: 192.168.94.2
	I1209 02:37:14.304752  312861 certs.go:195] generating shared ca certs ...
	I1209 02:37:14.304774  312861 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.304940  312861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:14.305016  312861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:14.305033  312861 certs.go:257] generating profile certs ...
	I1209 02:37:14.305100  312861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key
	I1209 02:37:14.305120  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt with IP's: []
	I1209 02:37:14.359436  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt ...
	I1209 02:37:14.359461  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt: {Name:mkd2687220e2c1a496f0919e5b4ee3ae985b0d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359653  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key ...
	I1209 02:37:14.359668  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key: {Name:mk9eda0520f2cbbe6316507c37cd6f28fc511268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359822  312861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20
	I1209 02:37:14.359847  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1209 02:37:14.444770  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 ...
	I1209 02:37:14.444793  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20: {Name:mk94bd2fac7c7e957c0ee327319c5c1e8a6301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.444968  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 ...
	I1209 02:37:14.444991  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20: {Name:mkacd03a1ebe1fb35635f22c6c191b2975875de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.445113  312861 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt
	I1209 02:37:14.445190  312861 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key
	I1209 02:37:14.445244  312861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key
	I1209 02:37:14.445259  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt with IP's: []
	I1209 02:37:14.560806  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt ...
	I1209 02:37:14.560826  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt: {Name:mke7ad5eda062e0b1092e0004408a09aa647aeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.560983  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key ...
	I1209 02:37:14.561002  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key: {Name:mk93c4daac2f0f9d1f8c2f6e132f0bae11b524ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.561200  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:14.561241  312861 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:14.561252  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:14.561274  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:14.561307  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:14.561340  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:14.561405  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:14.561980  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:14.580295  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:14.597083  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:14.613685  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:14.630255  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 02:37:14.648077  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:14.666598  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:14.683845  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:14.701559  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:14.724314  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:14.741496  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:14.760427  312861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:14.773786  312861 ssh_runner.go:195] Run: openssl version
	I1209 02:37:14.779710  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.787281  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:14.795901  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799927  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799992  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.839135  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.847352  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.854769  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.861800  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:14.869148  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872807  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872857  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.906788  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:14.913728  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:14.920733  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.928244  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:14.935526  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939120  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939164  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.983518  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:14.991697  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
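
The openssl/ln sequence above implements the standard OpenSSL CA directory layout: each trusted cert is symlinked under its subject hash plus a `.0` suffix, which is how the `openssl x509 -hash` runs map to the link names seen here (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs). For the CA cert, for example:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0
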
	I1209 02:37:15.000864  312861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:15.005011  312861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:15.005053  312861 kubeadm.go:401] StartCluster: {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:15.005116  312861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:15.005173  312861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:15.035472  312861 cri.go:89] found id: ""
	I1209 02:37:15.035518  312861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:15.045322  312861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:15.053145  312861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:15.053203  312861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:15.061178  312861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:15.061197  312861 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:15.061235  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:15.068770  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:15.068824  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:15.075842  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:15.083627  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:15.083711  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:15.091022  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.098306  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:15.098366  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.105103  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:15.112368  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:15.112418  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 02:37:15.119369  312861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:15.155406  312861 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:15.155454  312861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:15.189920  312861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:15.190010  312861 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:15.190083  312861 kubeadm.go:319] OS: Linux
	I1209 02:37:15.190144  312861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:15.190210  312861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:15.190296  312861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:15.190379  312861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:15.190454  312861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:15.190527  312861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:15.190604  312861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:15.190702  312861 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:15.249252  312861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:15.249405  312861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:15.249583  312861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:15.256114  312861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:15.259205  312861 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:15.259301  312861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:15.259380  312861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:15.555393  312861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:15.791444  312861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:16.204198  312861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:16.347360  312861 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:16.874857  312861 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:16.875048  312861 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.314689  312861 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:17.314865  312861 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.499551  312861 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:17.696286  312861 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:17.984705  312861 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:17.984811  312861 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:18.173479  312861 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:18.852948  312861 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:19.295701  312861 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:19.424695  312861 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:19.612418  312861 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:19.613112  312861 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:19.616719  312861 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:37:19.618181  312861 out.go:252]   - Booting up control plane ...
	I1209 02:37:19.618275  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:37:19.618393  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:37:19.619018  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:37:19.649026  312861 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:37:19.649149  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:37:19.657257  312861 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:37:19.657507  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:37:19.657567  312861 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:37:19.759620  312861 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:37:19.759784  312861 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:37:20.761316  312861 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001746054s
	I1209 02:37:20.765776  312861 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:37:20.765912  312861 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1209 02:37:20.766025  312861 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:37:20.766123  312861 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.481115522Z" level=info msg="Created container 90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b/kubernetes-dashboard" id=6bb81f12-8346-406a-925d-83edb2a52e2b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.481955644Z" level=info msg="Starting container: 90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13" id=6122ed9d-3022-4652-ba72-bdfea2e81c86 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:48 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:48.484136916Z" level=info msg="Started container" PID=1714 containerID=90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b/kubernetes-dashboard id=6122ed9d-3022-4652-ba72-bdfea2e81c86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f58e3c0b0419a0c5bdd47f5f6f05d518d2d1e78ac2f7a1472e59956186d9b8fa
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.538695066Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e16b0347-8429-4c68-bf68-cf9b11d217df name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.539560735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=505470f5-f79d-4c4d-9b81-358eb1cb1456 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.540548341Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4c9c4c22-b258-40d2-9b6e-ea76f9789b06 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.540726958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544687548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544868938Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0e802140fa546f8886723083ca856e743a4267ac19f5456d7a9cf438f3365f3e/merged/etc/passwd: no such file or directory"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.544897321Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0e802140fa546f8886723083ca856e743a4267ac19f5456d7a9cf438f3365f3e/merged/etc/group: no such file or directory"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.545183705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.573707275Z" level=info msg="Created container c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595: kube-system/storage-provisioner/storage-provisioner" id=4c9c4c22-b258-40d2-9b6e-ea76f9789b06 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.574211442Z" level=info msg="Starting container: c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595" id=1ae2d797-6a19-488a-b488-110bfd7f8c42 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:36:59 old-k8s-version-126117 crio[568]: time="2025-12-09T02:36:59.575966757Z" level=info msg="Started container" PID=1738 containerID=c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595 description=kube-system/storage-provisioner/storage-provisioner id=1ae2d797-6a19-488a-b488-110bfd7f8c42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5b228a1c60fc802d3d4b51f123de42c340fce43b13fb117bd454ba38c1b9184
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.419528482Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ff4da2d3-4e41-4818-94de-89f2d5d98b6b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.420573571Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=42fe3f74-90f0-49c0-8a43-5549b52956ad name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.421574766Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=2768de4e-64a2-41ba-80c5-65d55e74c2a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.421722781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.427934355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.428667694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.466393218Z" level=info msg="Created container 37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=2768de4e-64a2-41ba-80c5-65d55e74c2a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.467002708Z" level=info msg="Starting container: 37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7" id=d9348085-2e45-4959-9e3f-1a5be2fa3bbb name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.469204383Z" level=info msg="Started container" PID=1753 containerID=37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper id=d9348085-2e45-4959-9e3f-1a5be2fa3bbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3d1d3361e84b6f2399e8b45d89e6641877b26ac1ff079199e4d2654e1b3e2e8
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.552401108Z" level=info msg="Removing container: 26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283" id=d62127d3-ed65-403e-90cc-35d6e4415b67 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:02 old-k8s-version-126117 crio[568]: time="2025-12-09T02:37:02.563215153Z" level=info msg="Removed container 26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc/dashboard-metrics-scraper" id=d62127d3-ed65-403e-90cc-35d6e4415b67 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	37c28b22b7b48       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   b3d1d3361e84b       dashboard-metrics-scraper-5f989dc9cf-bd6dc       kubernetes-dashboard
	c3513e6f3e957       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   c5b228a1c60fc       storage-provisioner                              kube-system
	90f9e969d62ef       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   f58e3c0b0419a       kubernetes-dashboard-8694d4445c-5rc6b            kubernetes-dashboard
	9cdb1dfdcfea4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   85f51789d3825       coredns-5dd5756b68-5d9gm                         kube-system
	097c72a753fd3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   55f5c04accafb       busybox                                          default
	079fae7ab6686       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   074bc5aa1371c       kube-proxy-xjvf6                                 kube-system
	c6b69e396ad3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   c5b228a1c60fc       storage-provisioner                              kube-system
	22e7685929bf9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   c78a1761c7bcd       kindnet-xk6zs                                    kube-system
	cd4f4b4fa3c59       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   b7edf220de050       kube-apiserver-old-k8s-version-126117            kube-system
	7b6946b6f60bb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   1e841a76c71c7       kube-controller-manager-old-k8s-version-126117   kube-system
	5c61431ded035       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   f31dfa5f7ddba       etcd-old-k8s-version-126117                      kube-system
	a014d20dec589       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   a4c33924dc17f       kube-scheduler-old-k8s-version-126117            kube-system
	
	
	==> coredns [9cdb1dfdcfea40662105cf8fff8b3a41bfd59bed30fdb07bca9e68b99d1b7c53] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57248 - 31912 "HINFO IN 6151329663776760756.4907000566883293878. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.447825139s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
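
Note: the repeated 'plugin/ready: Still waiting on: "kubernetes"' lines come from CoreDNS's ready plugin, which serves an HTTP readiness endpoint that returns 503 until every enabled plugin reports ready; port 8181 below is the plugin's documented default, assumed here rather than read from this cluster's Corefile. A minimal poller as a sketch:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		for {
			resp, err := client.Get("http://127.0.0.1:8181/ready")
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					fmt.Println("coredns ready")
					return
				}
			}
			// Mirrors the "Still waiting" log lines: not ready yet, poll again.
			time.Sleep(time.Second)
		}
	}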
	
	
	==> describe nodes <==
	Name:               old-k8s-version-126117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-126117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=old-k8s-version-126117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-126117
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:36:58 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-126117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                fe5af2e7-907b-43f5-907c-9c3129342d44
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-5d9gm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-126117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-xk6zs                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-126117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-126117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-xjvf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-126117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-bd6dc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5rc6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node old-k8s-version-126117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-126117 event: Registered Node old-k8s-version-126117 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-126117 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-126117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-126117 event: Registered Node old-k8s-version-126117 in Controller
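
Note: the percentages in the Allocated resources table are computed against the Allocatable block above: 850m CPU requested of 8 CPU (8000m) is 850/8000 ≈ 10.6%, displayed as 10%; 220Mi of memory (225280Ki) against 32863348Ki allocatable is ≈ 0.7%, displayed as 0%.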
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [5c61431ded03512f0b0b99ea3e143673f0cbf0844745ab6308ce619d683d312a] <==
	{"level":"info","ts":"2025-12-09T02:36:25.981047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-09T02:36:25.981133Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-09T02:36:25.981285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:36:25.981325Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-09T02:36:25.984263Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-09T02:36:25.984454Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-09T02:36:25.984475Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-09T02:36:25.984563Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:36:25.984568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T02:36:26.971294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-09T02:36:26.971413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.971444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-09T02:36:26.972285Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-126117 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-09T02:36:26.972296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:36:26.972329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-09T02:36:26.972537Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-09T02:36:26.972625Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-09T02:36:26.973871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-09T02:36:26.973932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-09T02:36:45.618448Z","caller":"traceutil/trace.go:171","msg":"trace[517070542] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"106.017576ms","start":"2025-12-09T02:36:45.512407Z","end":"2025-12-09T02:36:45.618425Z","steps":["trace[517070542] 'process raft request'  (duration: 105.489953ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:36:46.643499Z","caller":"traceutil/trace.go:171","msg":"trace[1318274902] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"133.047374ms","start":"2025-12-09T02:36:46.510425Z","end":"2025-12-09T02:36:46.643473Z","steps":["trace[1318274902] 'process raft request'  (duration: 108.676105ms)","trace[1318274902] 'compare'  (duration: 24.216513ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:37:22 up  1:19,  0 user,  load average: 3.00, 2.56, 1.87
	Linux old-k8s-version-126117 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22e7685929bf9235ea63b9e6dde43b2c40fd4f6c5864ffcc5f2d959a3e4469d6] <==
	I1209 02:36:29.031413       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:29.031876       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 02:36:29.032105       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:29.032129       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:29.032153       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:29.319947       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:29.327474       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:29.327497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:29.328675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:29.827738       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:29.827781       1 metrics.go:72] Registering metrics
	I1209 02:36:29.827850       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:39.320343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:39.320391       1 main.go:301] handling current node
	I1209 02:36:49.320677       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:49.320733       1 main.go:301] handling current node
	I1209 02:36:59.319943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:36:59.319981       1 main.go:301] handling current node
	I1209 02:37:09.320917       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:37:09.320950       1 main.go:301] handling current node
	I1209 02:37:19.320984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 02:37:19.321025       1 main.go:301] handling current node
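
Note: the controller.go:390 line above is kindnet probing for NRI support; it fails here simply because /var/run/nri/nri.sock does not exist on this crio node. The probe amounts to a unix-socket dial, sketched below (illustrative, not kindnet's actual code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/nri/nri.sock", 2*time.Second)
		if err != nil {
			// Matches the log: "dial unix /var/run/nri/nri.sock: connect: no such file or directory"
			fmt.Println("NRI unavailable:", err)
			return
		}
		conn.Close()
		fmt.Println("NRI socket present")
	}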
	
	
	==> kube-apiserver [cd4f4b4fa3c59604fdb18dba3e4b3b8128da007c85eec89809b8c53268ac76cd] <==
	I1209 02:36:28.030569       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1209 02:36:28.120523       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 02:36:28.120916       1 shared_informer.go:318] Caches are synced for configmaps
	I1209 02:36:28.121134       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1209 02:36:28.122110       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1209 02:36:28.122133       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:28.120536       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1209 02:36:28.141784       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1209 02:36:28.141923       1 aggregator.go:166] initial CRD sync complete...
	I1209 02:36:28.141956       1 autoregister_controller.go:141] Starting autoregister controller
	I1209 02:36:28.142096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:36:28.142130       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:36:28.154240       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1209 02:36:28.173239       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:29.023205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:36:29.343409       1 controller.go:624] quota admission added evaluator for: namespaces
	I1209 02:36:29.390379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1209 02:36:29.423521       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:29.433602       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:29.444948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1209 02:36:29.497603       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.183.225"}
	I1209 02:36:29.510997       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.12.224"}
	I1209 02:36:40.802658       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1209 02:36:40.822707       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:36:40.823021       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b6946b6f60bbbe8e9236ae337e00d48c56ddf19606d6f3a3492f3af5958f720] <==
	I1209 02:36:40.841046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.783032ms"
	I1209 02:36:40.841207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.964043ms"
	I1209 02:36:40.845214       1 shared_informer.go:318] Caches are synced for PVC protection
	I1209 02:36:40.851297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.021945ms"
	I1209 02:36:40.851486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.284µs"
	I1209 02:36:40.854018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.684855ms"
	I1209 02:36:40.854112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.725µs"
	I1209 02:36:40.857924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.678µs"
	I1209 02:36:40.867161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.906µs"
	I1209 02:36:40.894896       1 shared_informer.go:318] Caches are synced for cronjob
	I1209 02:36:40.902378       1 shared_informer.go:318] Caches are synced for resource quota
	I1209 02:36:40.914226       1 shared_informer.go:318] Caches are synced for resource quota
	I1209 02:36:40.959660       1 shared_informer.go:318] Caches are synced for persistent volume
	I1209 02:36:41.327176       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:36:41.358525       1 shared_informer.go:318] Caches are synced for garbage collector
	I1209 02:36:41.358561       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1209 02:36:44.512134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.821µs"
	I1209 02:36:45.619998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.887µs"
	I1209 02:36:46.645283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.069µs"
	I1209 02:36:48.528232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.410039ms"
	I1209 02:36:48.528321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.019µs"
	I1209 02:37:02.565998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.775µs"
	I1209 02:37:03.107696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.615886ms"
	I1209 02:37:03.107803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.541µs"
	I1209 02:37:11.153226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.044µs"
	
	
	==> kube-proxy [079fae7ab668695a5dc40dc342004525589e751567722848987ee9bdb98ffaa5] <==
	I1209 02:36:28.887976       1 server_others.go:69] "Using iptables proxy"
	I1209 02:36:28.903022       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1209 02:36:28.924519       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:28.927792       1 server_others.go:152] "Using iptables Proxier"
	I1209 02:36:28.927844       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1209 02:36:28.927871       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1209 02:36:28.927908       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1209 02:36:28.930755       1 server.go:846] "Version info" version="v1.28.0"
	I1209 02:36:28.930905       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:28.931797       1 config.go:188] "Starting service config controller"
	I1209 02:36:28.933253       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1209 02:36:28.932404       1 config.go:97] "Starting endpoint slice config controller"
	I1209 02:36:28.933369       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1209 02:36:28.932787       1 config.go:315] "Starting node config controller"
	I1209 02:36:28.933438       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1209 02:36:29.033532       1 shared_informer.go:318] Caches are synced for node config
	I1209 02:36:29.033669       1 shared_informer.go:318] Caches are synced for service config
	I1209 02:36:29.033709       1 shared_informer.go:318] Caches are synced for endpoint slice config
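
Note: the proxier.go:251 line records kube-proxy setting net.ipv4.conf.all.route_localnet=1 so NodePorts answer on 127.0.0.1. A quick illustrative check of the resulting sysctl value from Go on the node:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Expect "1" after the proxier has started.
		fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
	}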
	
	
	==> kube-scheduler [a014d20dec589e1a973232c78daa628725af3a4e25a5ddd1fd633019a0917ac7] <==
	I1209 02:36:26.387945       1 serving.go:348] Generated self-signed cert in-memory
	W1209 02:36:28.065528       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:28.065567       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:28.065585       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:28.065599       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:28.094707       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1209 02:36:28.094736       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:28.096239       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:28.096792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 02:36:28.097162       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1209 02:36:28.097284       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1209 02:36:28.198217       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
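
Note: the requestheader warnings above are non-fatal (the scheduler continues without that authentication configuration), and the message itself names the fix. A client-go sketch of the suggested rolebinding, with illustrative names standing in for ROLEBINDING_NAME and YOUR_NS:YOUR_SA:

	package main

	import (
		"context"
		"log"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of the kubectl command in the warning; "auth-reader",
		// "my-ns" and "my-sa" are hypothetical placeholders.
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "auth-reader", Namespace: "kube-system"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "Role",
				Name:     "extension-apiserver-authentication-reader",
			},
			Subjects: []rbacv1.Subject{{Kind: "ServiceAccount", Name: "my-sa", Namespace: "my-ns"}},
		}
		if _, err := cs.RbacV1().RoleBindings("kube-system").Create(
			context.Background(), rb, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
	}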
	
	
	==> kubelet <==
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.840508     728 topology_manager.go:215] "Topology Admit Handler" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955330     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5rc6b\" (UID: \"4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955606     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6166aa42-3f63-4436-b48c-c2a876ef76a1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-bd6dc\" (UID: \"6166aa42-3f63-4436-b48c-c2a876ef76a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955693     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkgb\" (UniqueName: \"kubernetes.io/projected/6166aa42-3f63-4436-b48c-c2a876ef76a1-kube-api-access-rzkgb\") pod \"dashboard-metrics-scraper-5f989dc9cf-bd6dc\" (UID: \"6166aa42-3f63-4436-b48c-c2a876ef76a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc"
	Dec 09 02:36:40 old-k8s-version-126117 kubelet[728]: I1209 02:36:40.955806     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gzc\" (UniqueName: \"kubernetes.io/projected/4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a-kube-api-access-98gzc\") pod \"kubernetes-dashboard-8694d4445c-5rc6b\" (UID: \"4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b"
	Dec 09 02:36:44 old-k8s-version-126117 kubelet[728]: I1209 02:36:44.494157     728 scope.go:117] "RemoveContainer" containerID="dea6fd16bceb91616c8ca5c9398b5abfea11227ab50af30d98cea266a3878316"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: I1209 02:36:45.499021     728 scope.go:117] "RemoveContainer" containerID="dea6fd16bceb91616c8ca5c9398b5abfea11227ab50af30d98cea266a3878316"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: I1209 02:36:45.499499     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:45 old-k8s-version-126117 kubelet[728]: E1209 02:36:45.500127     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:46 old-k8s-version-126117 kubelet[728]: I1209 02:36:46.502627     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:46 old-k8s-version-126117 kubelet[728]: E1209 02:36:46.503089     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:48 old-k8s-version-126117 kubelet[728]: I1209 02:36:48.520873     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rc6b" podStartSLOduration=1.251732233 podCreationTimestamp="2025-12-09 02:36:40 +0000 UTC" firstStartedPulling="2025-12-09 02:36:41.167508085 +0000 UTC m=+15.833666288" lastFinishedPulling="2025-12-09 02:36:48.436580502 +0000 UTC m=+23.102738714" observedRunningTime="2025-12-09 02:36:48.520571545 +0000 UTC m=+23.186729764" watchObservedRunningTime="2025-12-09 02:36:48.520804659 +0000 UTC m=+23.186962878"
	Dec 09 02:36:51 old-k8s-version-126117 kubelet[728]: I1209 02:36:51.142810     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:36:51 old-k8s-version-126117 kubelet[728]: E1209 02:36:51.143072     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:36:59 old-k8s-version-126117 kubelet[728]: I1209 02:36:59.538309     728 scope.go:117] "RemoveContainer" containerID="c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.418971     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.551145     728 scope.go:117] "RemoveContainer" containerID="26b2fef6984716fae582b76d350e2c4dc5d5ddab95ee56e706a1e87760415283"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: I1209 02:37:02.551367     728 scope.go:117] "RemoveContainer" containerID="37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	Dec 09 02:37:02 old-k8s-version-126117 kubelet[728]: E1209 02:37:02.551783     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:37:11 old-k8s-version-126117 kubelet[728]: I1209 02:37:11.143130     728 scope.go:117] "RemoveContainer" containerID="37c28b22b7b484bf466ba9e7b09d6bbb4e0b4df209e7db053d9e464031655cf7"
	Dec 09 02:37:11 old-k8s-version-126117 kubelet[728]: E1209 02:37:11.143671     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-bd6dc_kubernetes-dashboard(6166aa42-3f63-4436-b48c-c2a876ef76a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-bd6dc" podUID="6166aa42-3f63-4436-b48c-c2a876ef76a1"
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:17 old-k8s-version-126117 systemd[1]: kubelet.service: Consumed 1.477s CPU time.
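
Note: the CrashLoopBackOff messages show the restart delay growing from 10s to 20s; kubelet doubles the back-off after each failed restart, capped (by default) at 5 minutes. A trivial model of that progression:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const maxBackoff = 5 * time.Minute // kubelet's default MaxContainerBackOff
		d := 10 * time.Second
		for i := 1; i <= 8; i++ {
			fmt.Printf("restart %d: back-off %s\n", i, d)
			d *= 2
			if d > maxBackoff {
				d = maxBackoff
			}
		}
	}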
	
	
	==> kubernetes-dashboard [90f9e969d62efe4c97d9df2db8208becad0b61003f0c2d1257fdc4fed142fa13] <==
	2025/12/09 02:36:48 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:48 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:48 Using secret token for csrf signing
	2025/12/09 02:36:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:48 Successful initial request to the apiserver, version: v1.28.0
	2025/12/09 02:36:48 Generating JWE encryption key
	2025/12/09 02:36:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:48 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:48 Creating in-cluster Sidecar client
	2025/12/09 02:36:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:48 Serving insecurely on HTTP port: 9090
	2025/12/09 02:37:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:48 Starting overwatch
	
	
	==> storage-provisioner [c3513e6f3e9579013369eabf5fafc9d2af5beebbe8c105d9f712cde0169be595] <==
	I1209 02:36:59.588332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:36:59.595720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:36:59.595771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 02:37:16.992980       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:16.993048       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85c09fe8-de97-42ff-bfa4-d07a489e759c", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df became leader
	I1209 02:37:16.993114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df!
	I1209 02:37:17.093492       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-126117_03653a7b-6d22-404a-9679-7cc06b1ea5df!
	
	
	==> storage-provisioner [c6b69e396ad3f3e4bce92baa0b1d59e69e9ad24edc6d95b4c3521edbbe8e9a6c] <==
	I1209 02:36:28.838810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:36:58.841471       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
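
Note: this earlier storage-provisioner instance died because its startup check, a GET of the apiserver's /version endpoint on the service IP, timed out while the control plane was restarting. A standalone sketch of the same probe (TLS verification skipped purely for illustration):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   32 * time.Second, // matches the ?timeout=32s in the failing request
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			fmt.Println("error getting server version:", err) // the F1209 line above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}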
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-126117 -n old-k8s-version-126117
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-126117 -n old-k8s-version-126117: exit status 2 (379.56926ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-126117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.30s)
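
Note: the next failure in this group (below) exits with GUEST_PAUSE after `sudo runc list -f json` fails with "open /run/runc: no such file or directory"; its trace shows the harness retrying with short, growing waits (retry.go:31) before giving up. An assumed-shape sketch of that retry pattern, not the actual minikube implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func main() {
		wait := 100 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Println(string(out))
				return
			}
			// Fails here with "open /run/runc: no such file or directory"
			// when runc's state dir is absent on the node.
			jitter := time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %s: %v\n", wait+jitter, err)
			time.Sleep(wait + jitter)
			wait *= 2
		}
	}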

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-512414 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-512414 --alsologtostderr -v=1: exit status 80 (1.804040107s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-512414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:37:21.009305  316467 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:21.009628  316467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:21.009653  316467 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:21.009660  316467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:21.009940  316467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:21.010196  316467 out.go:368] Setting JSON to false
	I1209 02:37:21.010208  316467 mustload.go:66] Loading cluster: default-k8s-diff-port-512414
	I1209 02:37:21.010552  316467 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:21.011052  316467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-512414 --format={{.State.Status}}
	I1209 02:37:21.029382  316467 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:37:21.029601  316467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:21.095649  316467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-09 02:37:21.081239982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:21.096879  316467 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-512414 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:37:21.099124  316467 out.go:179] * Pausing node default-k8s-diff-port-512414 ... 
	I1209 02:37:21.100610  316467 host.go:66] Checking if "default-k8s-diff-port-512414" exists ...
	I1209 02:37:21.100998  316467 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:21.101056  316467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-512414
	I1209 02:37:21.122564  316467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/default-k8s-diff-port-512414/id_rsa Username:docker}
	I1209 02:37:21.227008  316467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:21.243236  316467 pause.go:52] kubelet running: true
	I1209 02:37:21.243307  316467 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:21.442553  316467 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:21.442695  316467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:21.521569  316467 cri.go:89] found id: "4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26"
	I1209 02:37:21.521597  316467 cri.go:89] found id: "5bcc90f3b2b85b5a813e4a6297bed0ba94510f88322bbc811d37c3b31e147ed6"
	I1209 02:37:21.521603  316467 cri.go:89] found id: "b44fa08e1c948c8a2e74282b096d0d0f88dbea82e76db849be56ed398f3fe183"
	I1209 02:37:21.521608  316467 cri.go:89] found id: "048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f"
	I1209 02:37:21.521612  316467 cri.go:89] found id: "71d839a5d0175f6d17d7d3f55496772732092bcda33bd8ed81aa933ec7279dfa"
	I1209 02:37:21.521616  316467 cri.go:89] found id: "5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2"
	I1209 02:37:21.521620  316467 cri.go:89] found id: "53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd"
	I1209 02:37:21.521624  316467 cri.go:89] found id: "08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6"
	I1209 02:37:21.521628  316467 cri.go:89] found id: "59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237"
	I1209 02:37:21.521659  316467 cri.go:89] found id: "3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8"
	I1209 02:37:21.521665  316467 cri.go:89] found id: "e54085e8d51335921b3d7fe0b9a1d7d90a704d7634df52d9f90ba12ae61894cb"
	I1209 02:37:21.521670  316467 cri.go:89] found id: ""
	I1209 02:37:21.521716  316467 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:21.534371  316467 retry.go:31] will retry after 147.867701ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:21Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:21.682754  316467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:21.698559  316467 pause.go:52] kubelet running: false
	I1209 02:37:21.698607  316467 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:21.871819  316467 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:21.871893  316467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:21.942008  316467 cri.go:89] found id: "4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26"
	I1209 02:37:21.942033  316467 cri.go:89] found id: "5bcc90f3b2b85b5a813e4a6297bed0ba94510f88322bbc811d37c3b31e147ed6"
	I1209 02:37:21.942040  316467 cri.go:89] found id: "b44fa08e1c948c8a2e74282b096d0d0f88dbea82e76db849be56ed398f3fe183"
	I1209 02:37:21.942046  316467 cri.go:89] found id: "048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f"
	I1209 02:37:21.942050  316467 cri.go:89] found id: "71d839a5d0175f6d17d7d3f55496772732092bcda33bd8ed81aa933ec7279dfa"
	I1209 02:37:21.942055  316467 cri.go:89] found id: "5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2"
	I1209 02:37:21.942060  316467 cri.go:89] found id: "53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd"
	I1209 02:37:21.942064  316467 cri.go:89] found id: "08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6"
	I1209 02:37:21.942069  316467 cri.go:89] found id: "59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237"
	I1209 02:37:21.942078  316467 cri.go:89] found id: "3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8"
	I1209 02:37:21.942088  316467 cri.go:89] found id: "e54085e8d51335921b3d7fe0b9a1d7d90a704d7634df52d9f90ba12ae61894cb"
	I1209 02:37:21.942092  316467 cri.go:89] found id: ""
	I1209 02:37:21.942134  316467 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:21.955187  316467 retry.go:31] will retry after 481.27881ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:21Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:22.436786  316467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:22.451782  316467 pause.go:52] kubelet running: false
	I1209 02:37:22.451838  316467 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:22.638917  316467 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:22.638991  316467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:22.721730  316467 cri.go:89] found id: "4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26"
	I1209 02:37:22.721758  316467 cri.go:89] found id: "5bcc90f3b2b85b5a813e4a6297bed0ba94510f88322bbc811d37c3b31e147ed6"
	I1209 02:37:22.721766  316467 cri.go:89] found id: "b44fa08e1c948c8a2e74282b096d0d0f88dbea82e76db849be56ed398f3fe183"
	I1209 02:37:22.721771  316467 cri.go:89] found id: "048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f"
	I1209 02:37:22.721776  316467 cri.go:89] found id: "71d839a5d0175f6d17d7d3f55496772732092bcda33bd8ed81aa933ec7279dfa"
	I1209 02:37:22.721783  316467 cri.go:89] found id: "5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2"
	I1209 02:37:22.721787  316467 cri.go:89] found id: "53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd"
	I1209 02:37:22.721792  316467 cri.go:89] found id: "08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6"
	I1209 02:37:22.721796  316467 cri.go:89] found id: "59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237"
	I1209 02:37:22.721814  316467 cri.go:89] found id: "3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8"
	I1209 02:37:22.721823  316467 cri.go:89] found id: "e54085e8d51335921b3d7fe0b9a1d7d90a704d7634df52d9f90ba12ae61894cb"
	I1209 02:37:22.721827  316467 cri.go:89] found id: ""
	I1209 02:37:22.721873  316467 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:22.743150  316467 out.go:203] 
	W1209 02:37:22.744431  316467 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:37:22.744453  316467 out.go:285] * 
	* 
	W1209 02:37:22.750033  316467 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:37:22.751327  316467 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-512414 --alsologtostderr -v=1 failed: exit status 80
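Every pause attempt in the stderr capture above fails at the same step: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". /run/runc is runc's default state directory, and the kic node container mounts /run as a tmpfs (visible in the docker inspect below), so it comes back empty whenever the container restarts; once the retry budget is exhausted, the pause path surfaces the error as GUEST_PAUSE and the command exits 80. A minimal Go sketch of the retry pattern visible above, with hypothetical helper names and a hypothetical budget (this is not minikube's actual retry.go):

// Package main sketches the retry loop the log shows: rerun
// `runc list -f json` with growing backoff (148ms, then 481ms above)
// until it succeeds or the budget is spent.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning runs the same command the pause path runs over SSH above.
func listRunning() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list: %w: %s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(5 * time.Second) // hypothetical retry budget
	backoff := 150 * time.Millisecond           // first interval in the log is ~148ms
	for {
		err := listRunning()
		if err == nil {
			fmt.Println("runc list succeeded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err) // the point where minikube emits GUEST_PAUSE above
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // intervals grow between attempts, as in the log
	}
}

The failing check can be reproduced by hand against this profile with `docker exec default-k8s-diff-port-512414 sudo runc list -f json`, which should keep returning status 1 until the runtime repopulates /run/runc.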
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-512414
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-512414:

-- stdout --
	[
	    {
	        "Id": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	        "Created": "2025-12-09T02:35:16.836170165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:22.494356131Z",
	            "FinishedAt": "2025-12-09T02:36:21.673715352Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hosts",
	        "LogPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1-json.log",
	        "Name": "/default-k8s-diff-port-512414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-512414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-512414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	                "LowerDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-512414",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-512414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-512414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2085610ae99455a8d4314ee98810112518faab8d94ef878ba1944fb3e443f4e",
	            "SandboxKey": "/var/run/docker/netns/a2085610ae99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-512414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e16439d105c69dbf592b83cbbc24d475e1a7bdde09cef9f521cc22e0f04ea46e",
	                    "EndpointID": "3b50da89fe436bb65f84c59833f1b93a119de79ebeeadfbba0821f57301ded9a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "32:5f:5f:b8:b0:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-512414",
	                        "eee17c4f2786"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
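Two details in the inspect output line up with the runc failure above: HostConfig.Tmpfs mounts /run (and /tmp) as tmpfs, and State shows "FinishedAt" 02:36:21Z immediately followed by "StartedAt" 02:36:22Z, i.e. the node container was stopped and restarted about a minute before the pause attempt, recreating the tmpfs-backed /run empty. The relevant fields can be pulled directly with the same --format idiom the harness itself uses, e.g. `docker container inspect -f '{{.State.StartedAt}} {{.State.FinishedAt}} {{.HostConfig.Tmpfs}}' default-k8s-diff-port-512414`.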
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414: exit status 2 (428.68083ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
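Exit status 2 with "Running" on stdout is the expected shape after a failed pause: minikube status encodes component health in its exit code, and a non-zero status while the host still reports Running is consistent with kubelet having been disabled mid-pause (see "kubelet running: false" above), which is why the harness tolerates it as "may be ok".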
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25: (1.120102528s)
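In the Audit table below, note that none of the pause commands recorded (newest-cni-828614 at 02:36, old-k8s-version-126117 and default-k8s-diff-port-512414 at 02:37) has an END TIME, which matches the exit status 80 above and suggests the GUEST_PAUSE failure is not specific to this profile.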
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:06.265894  312861 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:06.266149  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266159  312861 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:06.266163  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266390  312861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:06.266890  312861 out.go:368] Setting JSON to false
	I1209 02:37:06.268011  312861 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4775,"bootTime":1765243051,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:06.268068  312861 start.go:143] virtualization: kvm guest
	I1209 02:37:06.269973  312861 out.go:179] * [embed-certs-485234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:06.271239  312861 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:06.271260  312861 notify.go:221] Checking for updates...
	I1209 02:37:06.273331  312861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:06.274481  312861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:06.275572  312861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:06.276773  312861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:06.277728  312861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:06.279204  312861 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:06.279294  312861 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:06.279368  312861 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:37:06.279440  312861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:06.303034  312861 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:06.303110  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.356600  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.347325006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.356738  312861 docker.go:319] overlay module found
	I1209 02:37:06.359001  312861 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:06.359972  312861 start.go:309] selected driver: docker
	I1209 02:37:06.359986  312861 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:06.360000  312861 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:06.360532  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.418200  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.408143545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.418358  312861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:06.418551  312861 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:06.419983  312861 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:06.420941  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:06.420995  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:06.421005  312861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:06.421065  312861 start.go:353] cluster config:
	{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:06.422178  312861 out.go:179] * Starting "embed-certs-485234" primary control-plane node in "embed-certs-485234" cluster
	I1209 02:37:06.423106  312861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:06.424069  312861 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:06.424889  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.424931  312861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:06.424943  312861 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:06.424980  312861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:06.425038  312861 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:06.425052  312861 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:06.425142  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:06.425166  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json: {Name:mk4ecce42013d99fe1ed5fecfa3a33c0e934834a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:06.444449  312861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:06.444468  312861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:06.444481  312861 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:06.444504  312861 start.go:360] acquireMachinesLock for embed-certs-485234: {Name:mk9b23f5c442a469a62d61ac899836b50beae7f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:06.444597  312861 start.go:364] duration metric: took 74.067µs to acquireMachinesLock for "embed-certs-485234"
	I1209 02:37:06.444619  312861 start.go:93] Provisioning new machine with config: &{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:06.444720  312861 start.go:125] createHost starting for "" (driver="docker")
	W1209 02:37:02.634996  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.135565  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.746125  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:08.245123  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:07.633907  300341 pod_ready.go:94] pod "coredns-66bc5c9577-gtkkc" is "Ready"
	I1209 02:37:07.633932  300341 pod_ready.go:86] duration metric: took 34.504712821s for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.636195  300341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.639858  300341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.639883  300341 pod_ready.go:86] duration metric: took 3.667895ms for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.641854  300341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.645251  300341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.645272  300341 pod_ready.go:86] duration metric: took 3.400654ms for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.647046  300341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.832888  300341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.832916  300341 pod_ready.go:86] duration metric: took 185.849084ms for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.033001  300341 pod_ready.go:83] waiting for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.433254  300341 pod_ready.go:94] pod "kube-proxy-nkdhm" is "Ready"
	I1209 02:37:08.433283  300341 pod_ready.go:86] duration metric: took 400.256248ms for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.632462  300341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032519  300341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:09.032544  300341 pod_ready.go:86] duration metric: took 400.052955ms for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032557  300341 pod_ready.go:40] duration metric: took 35.906617096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:09.076201  300341 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:37:09.153412  300341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-512414" cluster and "default" namespace by default
	I1209 02:37:06.446141  312861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:06.446346  312861 start.go:159] libmachine.API.Create for "embed-certs-485234" (driver="docker")
	I1209 02:37:06.446376  312861 client.go:173] LocalClient.Create starting
	I1209 02:37:06.446433  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:06.446463  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446481  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446530  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:06.446551  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446560  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446913  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:06.462783  312861 cli_runner.go:211] docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:06.462837  312861 network_create.go:284] running [docker network inspect embed-certs-485234] to gather additional debugging logs...
	I1209 02:37:06.462851  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234
	W1209 02:37:06.477787  312861 cli_runner.go:211] docker network inspect embed-certs-485234 returned with exit code 1
	I1209 02:37:06.477816  312861 network_create.go:287] error running [docker network inspect embed-certs-485234]: docker network inspect embed-certs-485234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-485234 not found
	I1209 02:37:06.477839  312861 network_create.go:289] output of [docker network inspect embed-certs-485234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-485234 not found
	
	** /stderr **
	I1209 02:37:06.477923  312861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:06.494719  312861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:06.495379  312861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:06.496115  312861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:06.496652  312861 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e16439d105c6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:ee:5c:7c:6c:62} reservation:<nil>}
	I1209 02:37:06.497265  312861 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ecc05a83343c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:d2:77:3b:89:79} reservation:<nil>}
	I1209 02:37:06.498119  312861 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0c90}
	I1209 02:37:06.498145  312861 network_create.go:124] attempt to create docker network embed-certs-485234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1209 02:37:06.498186  312861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-485234 embed-certs-485234
	I1209 02:37:06.545208  312861 network_create.go:108] docker network embed-certs-485234 192.168.94.0/24 created
	I1209 02:37:06.545234  312861 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-485234" container
	I1209 02:37:06.545311  312861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:06.562656  312861 cli_runner.go:164] Run: docker volume create embed-certs-485234 --label name.minikube.sigs.k8s.io=embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:06.579351  312861 oci.go:103] Successfully created a docker volume embed-certs-485234
	I1209 02:37:06.579429  312861 cli_runner.go:164] Run: docker run --rm --name embed-certs-485234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --entrypoint /usr/bin/test -v embed-certs-485234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:06.968560  312861 oci.go:107] Successfully prepared a docker volume embed-certs-485234
	I1209 02:37:06.968678  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.968693  312861 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:06.968796  312861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:10.828650  312861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.859783742s)
	I1209 02:37:10.828684  312861 kic.go:203] duration metric: took 3.859986647s to extract preloaded images to volume ...
	W1209 02:37:10.828767  312861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:10.828801  312861 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:10.828839  312861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:10.885101  312861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-485234 --name embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-485234 --network embed-certs-485234 --ip 192.168.94.2 --volume embed-certs-485234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:37:11.162572  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Running}}
	I1209 02:37:11.182739  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.201533  312861 cli_runner.go:164] Run: docker exec embed-certs-485234 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:11.245603  312861 oci.go:144] the created container "embed-certs-485234" has a running status.
	I1209 02:37:11.245680  312861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa...
	W1209 02:37:10.267075  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:12.746430  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:13.247465  302799 pod_ready.go:94] pod "coredns-7d764666f9-m6tbs" is "Ready"
	I1209 02:37:13.247521  302799 pod_ready.go:86] duration metric: took 34.507076064s for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.252380  302799 pod_ready.go:83] waiting for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.257623  302799 pod_ready.go:94] pod "etcd-no-preload-185074" is "Ready"
	I1209 02:37:13.257682  302799 pod_ready.go:86] duration metric: took 5.27485ms for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.259429  302799 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.263091  302799 pod_ready.go:94] pod "kube-apiserver-no-preload-185074" is "Ready"
	I1209 02:37:13.263117  302799 pod_ready.go:86] duration metric: took 3.670015ms for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.264813  302799 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:11.537220  312861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:11.563323  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.583790  312861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:11.583816  312861 kic_runner.go:114] Args: [docker exec --privileged embed-certs-485234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:37:11.626606  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.645123  312861 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:11.645212  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.664460  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.664789  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.664805  312861 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:11.795359  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.795387  312861 ubuntu.go:182] provisioning hostname "embed-certs-485234"
	I1209 02:37:11.795448  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.814229  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.814492  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.814514  312861 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-485234 && echo "embed-certs-485234" | sudo tee /etc/hostname
	I1209 02:37:11.948171  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.948244  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.966144  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.966365  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.966384  312861 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-485234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-485234/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-485234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:37:12.090842  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
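The SSH script whose result is logged above is an idempotent /etc/hosts update: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that logic, assuming direct file access instead of grep/sed/tee over SSH (naive line handling, sketch only):

package hosts

import (
	"os"
	"strings"
)

// EnsureHostsEntry mirrors the shell logic: no-op if the hostname is already
// mapped, otherwise rewrite the 127.0.1.1 line or append a new one.
func EnsureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		// grep -xq '.*\shostname' equivalent: a line ends with the hostname.
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // sed rewrite branch
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // tee -a branch
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}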
	I1209 02:37:12.090872  312861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:12.090923  312861 ubuntu.go:190] setting up certificates
	I1209 02:37:12.090933  312861 provision.go:84] configureAuth start
	I1209 02:37:12.090984  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.108441  312861 provision.go:143] copyHostCerts
	I1209 02:37:12.108498  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:12.108513  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:12.108581  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:12.108718  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:12.108731  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:12.108780  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:12.108915  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:12.108926  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:12.108962  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:12.109046  312861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.embed-certs-485234 san=[127.0.0.1 192.168.94.2 embed-certs-485234 localhost minikube]
	I1209 02:37:12.185770  312861 provision.go:177] copyRemoteCerts
	I1209 02:37:12.185823  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:12.185867  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.203781  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.297266  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:12.315682  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:37:12.332372  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:37:12.348767  312861 provision.go:87] duration metric: took 257.824432ms to configureAuth
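The configureAuth step that just completed generates a server certificate whose SANs cover every name and address in the san=[...] list logged above (container IP, loopback, hostname, and the localhost/minikube aliases). A minimal crypto/x509 sketch of generating such a cert, self-signed here for brevity where the real flow signs with the minikube CA key pair; the organization string and 26280h validity are taken from the log, everything else is illustrative:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// NewServerCert returns a DER-encoded server certificate whose SANs cover
// the given DNS names and IPs, analogous to the provision.go step above.
func NewServerCert(dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-485234"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		DNSNames:     dnsNames,                          // e.g. embed-certs-485234, localhost, minikube
		IPAddresses:  ips,                               // e.g. 127.0.0.1, 192.168.94.2
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template doubles as parent. minikube instead signs
	// with ca.pem/ca-key.pem from the certs directory logged above.
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}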
	I1209 02:37:12.348791  312861 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:12.348966  312861 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:12.349051  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.367892  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:12.368130  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:12.368152  312861 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:12.631127  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:12.631150  312861 machine.go:97] duration metric: took 986.000884ms to provisionDockerMachine
	I1209 02:37:12.631160  312861 client.go:176] duration metric: took 6.184776828s to LocalClient.Create
	I1209 02:37:12.631178  312861 start.go:167] duration metric: took 6.184833791s to libmachine.API.Create "embed-certs-485234"
	I1209 02:37:12.631185  312861 start.go:293] postStartSetup for "embed-certs-485234" (driver="docker")
	I1209 02:37:12.631193  312861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:12.631247  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:12.631288  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.650047  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.745621  312861 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:12.749630  312861 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:12.749691  312861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:37:12.749704  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:12.749756  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:12.749822  312861 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:12.749906  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:12.758040  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:12.779782  312861 start.go:296] duration metric: took 148.5859ms for postStartSetup
	I1209 02:37:12.780088  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.798780  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:12.799048  312861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:12.799087  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.816209  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.906142  312861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:12.910519  312861 start.go:128] duration metric: took 6.465788374s to createHost
	I1209 02:37:12.910538  312861 start.go:83] releasing machines lock for "embed-certs-485234", held for 6.465929672s
	I1209 02:37:12.910606  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.928304  312861 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:12.928356  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.928375  312861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:12.928447  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.946358  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.946972  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:13.091177  312861 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:13.097600  312861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:13.131258  312861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:13.135743  312861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:13.135810  312861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:13.162689  312861 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
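The find/mv pass above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so CRI-O ignores them; kindnet is chosen as the CNI further down for this driver/runtime combination. A small Go sketch of the same rename, assuming local filesystem access rather than an SSH runner:

package cni

import (
	"os"
	"path/filepath"
	"strings"
)

// DisableBridgeConfigs renames bridge/podman CNI config files in dir,
// skipping directories and anything already carrying the suffix.
func DisableBridgeConfigs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			p := filepath.Join(dir, name)
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}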
	I1209 02:37:13.162715  312861 start.go:496] detecting cgroup driver to use...
	I1209 02:37:13.162750  312861 detect.go:190] detected "systemd" cgroup driver on host os
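One common heuristic behind a "detected systemd cgroup driver" decision is to check for the cgroup v2 unified hierarchy, on which the systemd driver is the appropriate choice. This sketch is an assumption about the shape of the check, not necessarily detect.go's exact logic:

package cgroup

import "os"

// DetectDriver guesses the cgroup driver: cgroup.controllers only exists at
// the cgroup v2 unified-hierarchy root, where systemd should manage cgroups.
func DetectDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}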
	I1209 02:37:13.162798  312861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:13.178717  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:13.190805  312861 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:13.190853  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:13.206264  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:13.222864  312861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:13.305814  312861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:13.390556  312861 docker.go:234] disabling docker service ...
	I1209 02:37:13.390674  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:13.409495  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:13.422267  312861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:13.506320  312861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:13.589113  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:13.600697  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:13.614485  312861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:13.614532  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.624541  312861 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:13.624587  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.633049  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.641219  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.650011  312861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:13.657733  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.665900  312861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.678728  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.686933  312861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:13.693823  312861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:37:13.700444  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:13.779960  312861 ssh_runner.go:195] Run: sudo systemctl restart crio
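For reference, after the sed edits above the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows. This is reconstructed from the logged commands; the section headers reflect CRI-O's usual placement and were not captured from the node:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]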
	I1209 02:37:13.910038  312861 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:13.910103  312861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:37:13.914205  312861 start.go:564] Will wait 60s for crictl version
	I1209 02:37:13.914265  312861 ssh_runner.go:195] Run: which crictl
	I1209 02:37:13.917709  312861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:13.941238  312861 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:13.941311  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.969399  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.997525  312861 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:13.444584  302799 pod_ready.go:94] pod "kube-controller-manager-no-preload-185074" is "Ready"
	I1209 02:37:13.444613  302799 pod_ready.go:86] duration metric: took 179.781521ms for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.644581  302799 pod_ready.go:83] waiting for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.044726  302799 pod_ready.go:94] pod "kube-proxy-8jh88" is "Ready"
	I1209 02:37:14.044754  302799 pod_ready.go:86] duration metric: took 400.15086ms for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.243839  302799 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644301  302799 pod_ready.go:94] pod "kube-scheduler-no-preload-185074" is "Ready"
	I1209 02:37:14.644322  302799 pod_ready.go:86] duration metric: took 400.457904ms for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644333  302799 pod_ready.go:40] duration metric: took 35.907468936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:14.691366  302799 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:37:14.693696  302799 out.go:179] * Done! kubectl is now configured to use "no-preload-185074" cluster and "default" namespace by default
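The pod_ready.go waits that finish above amount to polling each pod's PodReady condition until it is True (or the pod is gone) within a deadline. A minimal client-go sketch of that loop; the poll interval and timeout are illustrative:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod until its PodReady condition is True or
// the timeout elapses, roughly matching the logged readiness waits.
func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return false
}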
	I1209 02:37:13.998454  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:14.015735  312861 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:14.019587  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:14.029452  312861 kubeadm.go:884] updating cluster {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:14.029561  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:14.029613  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.062629  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.062664  312861 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:14.062704  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.087930  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.087950  312861 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:14.087958  312861 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:14.088051  312861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-485234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:37:14.088114  312861 ssh_runner.go:195] Run: crio config
	I1209 02:37:14.133509  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:14.133535  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:14.133556  312861 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:14.133578  312861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-485234 NodeName:embed-certs-485234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:14.133735  312861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-485234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:37:14.133794  312861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:14.141697  312861 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:14.141757  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:14.149416  312861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1209 02:37:14.162206  312861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:14.177373  312861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
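With kubeadm.yaml.new staged above, one way to sanity-check a generated config before a real init (not a step minikube takes here) is kubeadm's dry-run mode, which renders manifests without touching the node. A hedged Go sketch reusing the binary path from the log:

package kubeadmcheck

import (
	"fmt"
	"os/exec"
)

// DryRun validates a kubeadm config by running init --dry-run against it.
func DryRun(configPath string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.2/kubeadm",
		"init", "--config", configPath, "--dry-run")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm dry-run: %v\n%s", err, out)
	}
	return nil
}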
	I1209 02:37:14.189424  312861 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:14.192881  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:14.201952  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:14.282853  312861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:14.304730  312861 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234 for IP: 192.168.94.2
	I1209 02:37:14.304752  312861 certs.go:195] generating shared ca certs ...
	I1209 02:37:14.304774  312861 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.304940  312861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:14.305016  312861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:14.305033  312861 certs.go:257] generating profile certs ...
	I1209 02:37:14.305100  312861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key
	I1209 02:37:14.305120  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt with IP's: []
	I1209 02:37:14.359436  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt ...
	I1209 02:37:14.359461  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt: {Name:mkd2687220e2c1a496f0919e5b4ee3ae985b0d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359653  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key ...
	I1209 02:37:14.359668  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key: {Name:mk9eda0520f2cbbe6316507c37cd6f28fc511268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359822  312861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20
	I1209 02:37:14.359847  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1209 02:37:14.444770  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 ...
	I1209 02:37:14.444793  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20: {Name:mk94bd2fac7c7e957c0ee327319c5c1e8a6301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.444968  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 ...
	I1209 02:37:14.444991  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20: {Name:mkacd03a1ebe1fb35635f22c6c191b2975875de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.445113  312861 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt
	I1209 02:37:14.445190  312861 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key
	I1209 02:37:14.445244  312861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key
	I1209 02:37:14.445259  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt with IP's: []
	I1209 02:37:14.560806  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt ...
	I1209 02:37:14.560826  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt: {Name:mke7ad5eda062e0b1092e0004408a09aa647aeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.560983  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key ...
	I1209 02:37:14.561002  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key: {Name:mk93c4daac2f0f9d1f8c2f6e132f0bae11b524ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.561200  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:14.561241  312861 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:14.561252  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:14.561274  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:14.561307  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:14.561340  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:14.561405  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:14.561980  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:14.580295  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:14.597083  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:14.613685  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:14.630255  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 02:37:14.648077  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:14.666598  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:14.683845  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:14.701559  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:14.724314  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:14.741496  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:14.760427  312861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:14.773786  312861 ssh_runner.go:195] Run: openssl version
	I1209 02:37:14.779710  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.787281  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:14.795901  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799927  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799992  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.839135  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.847352  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.854769  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.861800  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:14.869148  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872807  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872857  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.906788  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:14.913728  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:14.920733  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.928244  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:14.935526  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939120  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939164  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.983518  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:14.991697  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
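The openssl/ln pairs above implement OpenSSL's hashed-directory layout: each CA file in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink (3ec20f2e.0, b5213941.0, and 51391683.0 in this run). A small Go sketch of one such link, shelling out to openssl for the hash (the function name is illustrative):

package hashlink

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// LinkCertByHash symlinks pemPath into certsDir under its OpenSSL subject
// hash, mimicking the logged "openssl x509 -hash" plus "ln -fs" sequence.
func LinkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f: replace any existing link
	return os.Symlink(pemPath, link)
}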
	I1209 02:37:15.000864  312861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:15.005011  312861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:15.005053  312861 kubeadm.go:401] StartCluster: {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:15.005116  312861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:15.005173  312861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:15.035472  312861 cri.go:89] found id: ""
	I1209 02:37:15.035518  312861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:15.045322  312861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:15.053145  312861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:15.053203  312861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:15.061178  312861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:15.061197  312861 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:15.061235  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:15.068770  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:15.068824  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:15.075842  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:15.083627  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:15.083711  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:15.091022  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.098306  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:15.098366  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.105103  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:15.112368  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:15.112418  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 02:37:15.119369  312861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:15.155406  312861 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:15.155454  312861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:15.189920  312861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:15.190010  312861 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:15.190083  312861 kubeadm.go:319] OS: Linux
	I1209 02:37:15.190144  312861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:15.190210  312861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:15.190296  312861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:15.190379  312861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:15.190454  312861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:15.190527  312861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:15.190604  312861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:15.190702  312861 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:15.249252  312861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:15.249405  312861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:15.249583  312861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:15.256114  312861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:15.259205  312861 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:15.259301  312861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:15.259380  312861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:15.555393  312861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:15.791444  312861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:16.204198  312861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:16.347360  312861 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:16.874857  312861 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:16.875048  312861 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.314689  312861 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:17.314865  312861 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.499551  312861 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:17.696286  312861 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:17.984705  312861 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:17.984811  312861 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:18.173479  312861 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:18.852948  312861 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:19.295701  312861 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:19.424695  312861 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:19.612418  312861 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:19.613112  312861 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:19.616719  312861 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:37:19.618181  312861 out.go:252]   - Booting up control plane ...
	I1209 02:37:19.618275  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:37:19.618393  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:37:19.619018  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:37:19.649026  312861 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:37:19.649149  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:37:19.657257  312861 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:37:19.657507  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:37:19.657567  312861 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:37:19.759620  312861 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:37:19.759784  312861 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:37:20.761316  312861 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001746054s
	I1209 02:37:20.765776  312861 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:37:20.765912  312861 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1209 02:37:20.766025  312861 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:37:20.766123  312861 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
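The kubelet-check and control-plane-check lines above poll plain HTTP(S) health endpoints until they answer 200. A minimal Go sketch of such a probe; TLS verification is skipped because the components' serving certs are not in the host trust store, and the timeouts are illustrative:

package probe

import (
	"crypto/tls"
	"net/http"
	"time"
)

// Healthz polls url until it returns 200 OK or the timeout elapses.
func Healthz(url string, timeout time.Duration) bool {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed component certs
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}

For example, Healthz("http://127.0.0.1:10248/healthz", 4*time.Minute) mirrors the kubelet check, and Healthz("https://192.168.94.2:8443/livez", 4*time.Minute) the kube-apiserver one.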
	
	
	==> CRI-O <==
	Dec 09 02:36:53 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:53.828789902Z" level=info msg="Started container" PID=1765 containerID=526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper id=0f72fc4f-a8b2-4395-8a49-1086eb16ef3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d181da0d644d527c0a4e8fb28e50439b85141bf50de780e61363b086a1998e8
	Dec 09 02:36:54 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:54.41013527Z" level=info msg="Removing container: 8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11" id=1ecf41c2-9cd3-4ceb-9b48-c3a53d37dccf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:36:54 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:54.42003985Z" level=info msg="Removed container 8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=1ecf41c2-9cd3-4ceb-9b48-c3a53d37dccf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.43436926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=65a4c159-aeb1-4e30-9475-f2cb736d759b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.435392982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19d91470-4b32-4578-9ded-b68cb9279b67 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.436531798Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4a2dbb33-dbf4-41db-b803-f5df96266045 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.436705922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442180098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.44238818Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cf0809ab71c619b2c152fe52fda4ff2a8c55131c68e5d1a053cf146db5923931/merged/etc/passwd: no such file or directory"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442425197Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cf0809ab71c619b2c152fe52fda4ff2a8c55131c68e5d1a053cf146db5923931/merged/etc/group: no such file or directory"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442839977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.468939083Z" level=info msg="Created container 4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26: kube-system/storage-provisioner/storage-provisioner" id=4a2dbb33-dbf4-41db-b803-f5df96266045 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.469516634Z" level=info msg="Starting container: 4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26" id=eb3e0514-a067-4b34-91b8-fc90ccd99493 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.471504174Z" level=info msg="Started container" PID=1783 containerID=4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26 description=kube-system/storage-provisioner/storage-provisioner id=eb3e0514-a067-4b34-91b8-fc90ccd99493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b784636ec02d0254db0cfdf9b1a6cdfa54b38a43dfa8d91e75da6bca85d4c34
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.275376933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67de4005-e9ad-4c46-9ff1-1e4892f34039 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.276405066Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1aaa5f4-950b-4e8d-a46d-45e9fddce66b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.277566996Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=b7478a17-a13b-4979-989d-310f98c402df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.277755781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.284449887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.284961072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.313840449Z" level=info msg="Created container 3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=b7478a17-a13b-4979-989d-310f98c402df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.314474171Z" level=info msg="Starting container: 3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8" id=99dea247-056b-4522-a398-8caaed4565e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.316603802Z" level=info msg="Started container" PID=1818 containerID=3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper id=99dea247-056b-4522-a398-8caaed4565e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d181da0d644d527c0a4e8fb28e50439b85141bf50de780e61363b086a1998e8
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.478546759Z" level=info msg="Removing container: 526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88" id=28f0ae3c-e0a0-44c4-8b83-f7523f13e188 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.488121351Z" level=info msg="Removed container 526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=28f0ae3c-e0a0-44c4-8b83-f7523f13e188 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3a149228f14b9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   1d181da0d644d       dashboard-metrics-scraper-6ffb444bf9-5kpdg             kubernetes-dashboard
	4f758b488db40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   2b784636ec02d       storage-provisioner                                    kube-system
	e54085e8d5133       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   39df424f96013       kubernetes-dashboard-855c9754f9-ktttw                  kubernetes-dashboard
	5bcc90f3b2b85       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   771d0965fab59       coredns-66bc5c9577-gtkkc                               kube-system
	402fd6ba3937b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   882ad7154fc22       busybox                                                default
	b44fa08e1c948       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   76108ede713f0       kube-proxy-nkdhm                                       kube-system
	048f1c30da0ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   2b784636ec02d       storage-provisioner                                    kube-system
	71d839a5d0175       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   0e5769c41efea       kindnet-5hz5b                                          kube-system
	5e7dc88fe52e6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   1ca2937fceae0       etcd-default-k8s-diff-port-512414                      kube-system
	53e2ef1a8035d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   6f9eed3928fd7       kube-controller-manager-default-k8s-diff-port-512414   kube-system
	08b84802df75f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   9ff98d0a11965       kube-apiserver-default-k8s-diff-port-512414            kube-system
	59648f3bd410e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   28acf9be160c4       kube-scheduler-default-k8s-diff-port-512414            kube-system
	
	
	==> coredns [5bcc90f3b2b85b5a813e4a6297bed0ba94510f88322bbc811d37c3b31e147ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51245 - 65356 "HINFO IN 3221666007643804166.6251389351970285842. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.08486378s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
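
All three list failures above are the same symptom: TCP to the kubernetes Service VIP (10.96.0.1:443) timing out from inside the pod during startup. A minimal sketch of that connectivity check, with the caveat that it is meant to run from a pod on this cluster, since the VIP is not routable from anywhere else:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The same dial CoreDNS's reflector is failing: plain TCP to the
		// Service VIP. 10.96.0.1:443 is only routable from inside the cluster.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // an "i/o timeout" here matches the log above
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}

The first storage-provisioner instance further down hits the same timeout at 02:37:02 and its replacement connects a second later, so the VIP appears to have become reachable only partway through this startup window rather than being persistently broken.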
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-512414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-512414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=default-k8s-diff-port-512414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-512414
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-512414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                73837a98-9d7d-40ab-bb93-0a67d7e98624
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-gtkkc                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-512414                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-5hz5b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-512414             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-512414    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-nkdhm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-512414             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5kpdg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ktttw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-512414 event: Registered Node default-k8s-diff-port-512414 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-512414 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-512414 event: Registered Node default-k8s-diff-port-512414 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2] <==
	{"level":"warn","ts":"2025-12-09T02:36:31.114426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.123312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.135304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.143359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.150273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.164649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.172374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.183722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.192779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.200791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.207249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.214893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.223512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.230181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.237488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.244883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.252097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.260067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.267903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.274620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.281844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.300455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.307378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.314253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.371805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38028","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:37:24 up  1:19,  0 user,  load average: 3.48, 2.67, 1.91
	Linux default-k8s-diff-port-512414 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [71d839a5d0175f6d17d7d3f55496772732092bcda33bd8ed81aa933ec7279dfa] <==
	I1209 02:36:32.865912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:32.866164       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1209 02:36:32.866322       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:32.866343       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:32.866371       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:33.070443       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:33.070473       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:33.070484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:33.071785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:33.462051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:33.462328       1 metrics.go:72] Registering metrics
	I1209 02:36:33.462396       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:43.070934       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:36:43.071009       1 main.go:301] handling current node
	I1209 02:36:53.075721       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:36:53.075783       1 main.go:301] handling current node
	I1209 02:37:03.070252       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:03.070298       1 main.go:301] handling current node
	I1209 02:37:13.070609       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:13.070675       1 main.go:301] handling current node
	I1209 02:37:23.073739       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:23.074368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6] <==
	I1209 02:36:31.928248       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:36:31.928268       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:31.928889       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 02:36:31.929026       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 02:36:31.929565       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:31.929834       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 02:36:31.929895       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 02:36:31.932684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 02:36:31.941792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1209 02:36:31.942546       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 02:36:31.947176       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:31.970937       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:31.973820       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:31.974003       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:36:32.280944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:32.329283       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:32.358983       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:32.379726       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:32.392811       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:32.456724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.22.56"}
	I1209 02:36:32.474023       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.242.15"}
	I1209 02:36:32.833833       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:36:35.617312       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:36:35.666854       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:36:35.717522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd] <==
	I1209 02:36:35.245457       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 02:36:35.245487       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 02:36:35.245497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 02:36:35.245504       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 02:36:35.247664       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:36:35.251970       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:36:35.263391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:36:35.263415       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:36:35.263466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 02:36:35.263490       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:36:35.263493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:36:35.263491       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 02:36:35.263572       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:36:35.263598       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:36:35.263838       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 02:36:35.264092       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:36:35.265984       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:36:35.269219       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:36:35.272481       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:36:35.280656       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:36:35.283981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:36:35.285063       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:36:35.285077       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 02:36:35.285102       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 02:36:35.287239       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [b44fa08e1c948c8a2e74282b096d0d0f88dbea82e76db849be56ed398f3fe183] <==
	I1209 02:36:32.656438       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:32.714975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:36:32.816109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:36:32.816148       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1209 02:36:32.816237       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:32.838423       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:32.838487       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:36:32.844976       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:32.845424       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:36:32.845510       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:32.847926       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:32.848006       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:32.847840       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:32.848727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:32.849118       1 config.go:200] "Starting service config controller"
	I1209 02:36:32.849135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:32.849463       1 config.go:309] "Starting node config controller"
	I1209 02:36:32.849517       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:32.849526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:32.948498       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:32.950257       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:32.951488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
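
The Waiting/Caches-are-synced pairs above are the standard client-go informer startup protocol: each config controller registers an informer, then blocks until the initial list is cached before acting on events. A minimal sketch of that pattern, using a Service informer as a stand-in; the kubeconfig path and the 30s resync interval are placeholder choices for the sketch, not values taken from kube-proxy:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		services := factory.Core().V1().Services().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		fmt.Println("Waiting for caches to sync")
		if !cache.WaitForCacheSync(stop, services.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("Caches are synced")
	}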
	
	
	==> kube-scheduler [59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237] <==
	I1209 02:36:30.753312       1 serving.go:386] Generated self-signed cert in-memory
	I1209 02:36:32.458882       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:36:32.458908       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:32.466454       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1209 02:36:32.466541       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1209 02:36:32.466605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.466716       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.466621       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.466798       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.466889       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:32.467674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:32.567520       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.567543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.567734       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 09 02:36:35 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:35.998222     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2d8f564-44f9-4bad-8be1-7ea025ad2cf4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ktttw\" (UID: \"a2d8f564-44f9-4bad-8be1-7ea025ad2cf4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktttw"
	Dec 09 02:36:35 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:35.998249     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7k8\" (UniqueName: \"kubernetes.io/projected/07a7e4be-d8c0-44e3-8c59-654c1a33b3c3-kube-api-access-nr7k8\") pod \"dashboard-metrics-scraper-6ffb444bf9-5kpdg\" (UID: \"07a7e4be-d8c0-44e3-8c59-654c1a33b3c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg"
	Dec 09 02:36:37 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:37.561949     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 09 02:36:42 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:42.726029     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktttw" podStartSLOduration=4.015708514 podStartE2EDuration="7.726006374s" podCreationTimestamp="2025-12-09 02:36:35 +0000 UTC" firstStartedPulling="2025-12-09 02:36:36.20424153 +0000 UTC m=+7.100038336" lastFinishedPulling="2025-12-09 02:36:39.914539379 +0000 UTC m=+10.810336196" observedRunningTime="2025-12-09 02:36:40.373171984 +0000 UTC m=+11.268968809" watchObservedRunningTime="2025-12-09 02:36:42.726006374 +0000 UTC m=+13.621803199"
	Dec 09 02:36:43 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:43.373179     725 scope.go:117] "RemoveContainer" containerID="77b373dcc10b3a8129df23b7ab4d13733cfb558de33889c1fe5ab5c1b0e540bc"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:44.377830     725 scope.go:117] "RemoveContainer" containerID="77b373dcc10b3a8129df23b7ab4d13733cfb558de33889c1fe5ab5c1b0e540bc"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:44.378024     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:44.378227     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:36:45 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:45.384664     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:45 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:45.384896     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:36:53 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:53.787431     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:54.408724     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:54.408976     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:54.409203     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:03.434014     725 scope.go:117] "RemoveContainer" containerID="048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:03.787189     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: E1209 02:37:03.787352     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.274846     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.477350     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.477590     725 scope.go:117] "RemoveContainer" containerID="3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: E1209 02:37:18.477811     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: kubelet.service: Consumed 1.721s CPU time.
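
The back-off values in the kubelet errors above (10s, then 20s, then 40s) follow CrashLoopBackOff's doubling schedule: kubelet starts at 10s and doubles after each failed restart up to a cap, 5 minutes by default. The cap is stated here from general kubelet behavior rather than from this log. A sketch of the schedule:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// 10s base and 5m cap reflect kubelet defaults (the cap is an
		// assumption; this log only shows 10s, 20s, and 40s).
		delay, max := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 8; attempt++ {
			fmt.Printf("failed restart %d: next back-off %s\n", attempt, delay)
			if delay *= 2; delay > max {
				delay = max
			}
		}
	}

The systemd shutdown at 02:37:21 then cuts the loop short, presumably the Pause step under test stopping kubelet, which is why the post-mortem log ends with the service deactivating.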
	
	
	==> kubernetes-dashboard [e54085e8d51335921b3d7fe0b9a1d7d90a704d7634df52d9f90ba12ae61894cb] <==
	2025/12/09 02:36:39 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:39 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:39 Using secret token for csrf signing
	2025/12/09 02:36:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:39 Successful initial request to the apiserver, version: v1.34.2
	2025/12/09 02:36:39 Generating JWE encryption key
	2025/12/09 02:36:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:40 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:40 Creating in-cluster Sidecar client
	2025/12/09 02:36:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:40 Serving insecurely on HTTP port: 9090
	2025/12/09 02:37:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:39 Starting overwatch
	
	
	==> storage-provisioner [048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f] <==
	I1209 02:36:32.617701       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:37:02.621421       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26] <==
	I1209 02:37:03.484302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:37:03.492401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:37:03.492440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:37:03.494500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:06.950259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:11.210550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:14.809513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:17.862733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.885518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.890484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:20.890758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:20.891084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802!
	I1209 02:37:20.891101       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2818b0d6-e891-4733-8290-62f4a6a50242", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802 became leader
	W1209 02:37:20.893623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.897472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:20.994313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802!
	W1209 02:37:22.901689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:22.906227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414: exit status 2 (357.000597ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-512414
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-512414:

-- stdout --
	[
	    {
	        "Id": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	        "Created": "2025-12-09T02:35:16.836170165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:22.494356131Z",
	            "FinishedAt": "2025-12-09T02:36:21.673715352Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/hosts",
	        "LogPath": "/var/lib/docker/containers/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1/eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1-json.log",
	        "Name": "/default-k8s-diff-port-512414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-512414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-512414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eee17c4f2786c7e444545b4ab48eee3a165f3e7008f0c69b1c84bd3177055ae1",
	                "LowerDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b174599ecfd3c7dfd2bb2141720f9799af76ccf61080b64fd9a9389105f7dc4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-512414",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-512414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-512414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-512414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2085610ae99455a8d4314ee98810112518faab8d94ef878ba1944fb3e443f4e",
	            "SandboxKey": "/var/run/docker/netns/a2085610ae99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-512414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e16439d105c69dbf592b83cbbc24d475e1a7bdde09cef9f521cc22e0f04ea46e",
	                    "EndpointID": "3b50da89fe436bb65f84c59833f1b93a119de79ebeeadfbba0821f57301ded9a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "32:5f:5f:b8:b0:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-512414",
	                        "eee17c4f2786"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
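The inspect dump above is where the harness reads the container's mapped host ports: under NetworkSettings.Ports, the 22/tcp entry resolves to 127.0.0.1:33083, the address its SSH clients later dial. The same lookup appears further down in this log as a Go template query (docker container inspect -f on the Ports map). A minimal, self-contained Go sketch of that lookup follows; it assumes the docker CLI is on PATH, reuses the container name from this test, and is not minikube's actual cli_runner helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// hostPort extracts the first host binding for a container port such as
	// "22/tcp" from `docker container inspect` JSON output (an array of objects).
	func hostPort(container, port string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]struct{ HostIp, HostPort string }
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			return "", err
		}
		if len(containers) == 0 {
			return "", fmt.Errorf("container %s not found", container)
		}
		bindings := containers[0].NetworkSettings.Ports[port]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no binding for %s", port)
		}
		return bindings[0].HostIp + ":" + bindings[0].HostPort, nil
	}

	func main() {
		// For the inspect output above this prints 127.0.0.1:33083.
		addr, err := hostPort("default-k8s-diff-port-512414", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(addr)
	}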
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414: exit status 2 (354.151741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-512414 logs -n 25: (1.195419374s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-572052                                                                                                                                                                                                                            │ cert-expiration-572052       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-185074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ stop    │ -p no-preload-185074 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
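	(Reading the first entry below against that format: in "I1209 02:37:06.265894  312861 out.go:360]", the leading I is the Info severity, 1209 the December 9 date, 02:37:06.265894 the timestamp, 312861 the thread id, and out.go:360 the file and line that emitted the message.)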
	I1209 02:37:06.265894  312861 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:06.266149  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266159  312861 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:06.266163  312861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:06.266390  312861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:06.266890  312861 out.go:368] Setting JSON to false
	I1209 02:37:06.268011  312861 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4775,"bootTime":1765243051,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:06.268068  312861 start.go:143] virtualization: kvm guest
	I1209 02:37:06.269973  312861 out.go:179] * [embed-certs-485234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:06.271239  312861 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:06.271260  312861 notify.go:221] Checking for updates...
	I1209 02:37:06.273331  312861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:06.274481  312861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:06.275572  312861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:06.276773  312861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:06.277728  312861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:06.279204  312861 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:06.279294  312861 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:06.279368  312861 config.go:182] Loaded profile config "old-k8s-version-126117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 02:37:06.279440  312861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:06.303034  312861 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:06.303110  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.356600  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.347325006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.356738  312861 docker.go:319] overlay module found
	I1209 02:37:06.359001  312861 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:06.359972  312861 start.go:309] selected driver: docker
	I1209 02:37:06.359986  312861 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:06.360000  312861 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:06.360532  312861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:06.418200  312861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:37:06.408143545 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:06.418358  312861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:06.418551  312861 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:06.419983  312861 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:06.420941  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:06.420995  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:06.421005  312861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:06.421065  312861 start.go:353] cluster config:
	{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:06.422178  312861 out.go:179] * Starting "embed-certs-485234" primary control-plane node in "embed-certs-485234" cluster
	I1209 02:37:06.423106  312861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:06.424069  312861 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:06.424889  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.424931  312861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:06.424943  312861 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:06.424980  312861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:06.425038  312861 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:06.425052  312861 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:06.425142  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:06.425166  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json: {Name:mk4ecce42013d99fe1ed5fecfa3a33c0e934834a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:06.444449  312861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:06.444468  312861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:06.444481  312861 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:06.444504  312861 start.go:360] acquireMachinesLock for embed-certs-485234: {Name:mk9b23f5c442a469a62d61ac899836b50beae7f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:06.444597  312861 start.go:364] duration metric: took 74.067µs to acquireMachinesLock for "embed-certs-485234"
	I1209 02:37:06.444619  312861 start.go:93] Provisioning new machine with config: &{Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:06.444720  312861 start.go:125] createHost starting for "" (driver="docker")
	W1209 02:37:02.634996  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.135565  300341 pod_ready.go:104] pod "coredns-66bc5c9577-gtkkc" is not "Ready", error: <nil>
	W1209 02:37:05.746125  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:08.245123  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:07.633907  300341 pod_ready.go:94] pod "coredns-66bc5c9577-gtkkc" is "Ready"
	I1209 02:37:07.633932  300341 pod_ready.go:86] duration metric: took 34.504712821s for pod "coredns-66bc5c9577-gtkkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.636195  300341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.639858  300341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.639883  300341 pod_ready.go:86] duration metric: took 3.667895ms for pod "etcd-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.641854  300341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.645251  300341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.645272  300341 pod_ready.go:86] duration metric: took 3.400654ms for pod "kube-apiserver-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.647046  300341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:07.832888  300341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:07.832916  300341 pod_ready.go:86] duration metric: took 185.849084ms for pod "kube-controller-manager-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.033001  300341 pod_ready.go:83] waiting for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.433254  300341 pod_ready.go:94] pod "kube-proxy-nkdhm" is "Ready"
	I1209 02:37:08.433283  300341 pod_ready.go:86] duration metric: took 400.256248ms for pod "kube-proxy-nkdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:08.632462  300341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032519  300341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-512414" is "Ready"
	I1209 02:37:09.032544  300341 pod_ready.go:86] duration metric: took 400.052955ms for pod "kube-scheduler-default-k8s-diff-port-512414" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:09.032557  300341 pod_ready.go:40] duration metric: took 35.906617096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:09.076201  300341 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:37:09.153412  300341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-512414" cluster and "default" namespace by default
	I1209 02:37:06.446141  312861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:06.446346  312861 start.go:159] libmachine.API.Create for "embed-certs-485234" (driver="docker")
	I1209 02:37:06.446376  312861 client.go:173] LocalClient.Create starting
	I1209 02:37:06.446433  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:06.446463  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446481  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446530  312861 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:06.446551  312861 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:06.446560  312861 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:06.446913  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:06.462783  312861 cli_runner.go:211] docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:06.462837  312861 network_create.go:284] running [docker network inspect embed-certs-485234] to gather additional debugging logs...
	I1209 02:37:06.462851  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234
	W1209 02:37:06.477787  312861 cli_runner.go:211] docker network inspect embed-certs-485234 returned with exit code 1
	I1209 02:37:06.477816  312861 network_create.go:287] error running [docker network inspect embed-certs-485234]: docker network inspect embed-certs-485234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-485234 not found
	I1209 02:37:06.477839  312861 network_create.go:289] output of [docker network inspect embed-certs-485234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-485234 not found
	
	** /stderr **
	I1209 02:37:06.477923  312861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:06.494719  312861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:06.495379  312861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:06.496115  312861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:06.496652  312861 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e16439d105c6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:ee:5c:7c:6c:62} reservation:<nil>}
	I1209 02:37:06.497265  312861 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ecc05a83343c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:d2:77:3b:89:79} reservation:<nil>}
	I1209 02:37:06.498119  312861 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0c90}
	I1209 02:37:06.498145  312861 network_create.go:124] attempt to create docker network embed-certs-485234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1209 02:37:06.498186  312861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-485234 embed-certs-485234
	I1209 02:37:06.545208  312861 network_create.go:108] docker network embed-certs-485234 192.168.94.0/24 created
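	The run of network.go:211 probes above is minikube stepping through candidate private /24 subnets until it finds one with no existing bridge. A self-contained toy reproduction of that walk follows; the step of 9 is inferred from the probed sequence (49, 58, 67, 76, 85, 94) rather than read from minikube's source, and the taken set is hard-coded from this log:

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by the bridges seen in the probes above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		for third := 49; third <= 247; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			// With the taken set above, this selects 192.168.94.0/24.
			fmt.Println("using free private subnet", subnet)
			return
		}
	}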
	I1209 02:37:06.545234  312861 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-485234" container
	I1209 02:37:06.545311  312861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:06.562656  312861 cli_runner.go:164] Run: docker volume create embed-certs-485234 --label name.minikube.sigs.k8s.io=embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:06.579351  312861 oci.go:103] Successfully created a docker volume embed-certs-485234
	I1209 02:37:06.579429  312861 cli_runner.go:164] Run: docker run --rm --name embed-certs-485234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --entrypoint /usr/bin/test -v embed-certs-485234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:06.968560  312861 oci.go:107] Successfully prepared a docker volume embed-certs-485234
	I1209 02:37:06.968678  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:06.968693  312861 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:06.968796  312861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:10.828650  312861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-485234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.859783742s)
	I1209 02:37:10.828684  312861 kic.go:203] duration metric: took 3.859986647s to extract preloaded images to volume ...
	W1209 02:37:10.828767  312861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:10.828801  312861 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:10.828839  312861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:10.885101  312861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-485234 --name embed-certs-485234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-485234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-485234 --network embed-certs-485234 --ip 192.168.94.2 --volume embed-certs-485234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
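	That docker run line has the same shape as the HostConfig captured in the inspect dump of default-k8s-diff-port-512414 at the top of this failure: --memory=3072mb becomes Memory: 3221225472 (3072 × 1024 × 1024 bytes, with MemorySwap defaulting to twice that, 6442450944), --tmpfs /run and /tmp become the Tmpfs map, and --privileged plus the --security-opt flags surface as Privileged: true and the SecurityOpt list.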
	I1209 02:37:11.162572  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Running}}
	I1209 02:37:11.182739  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.201533  312861 cli_runner.go:164] Run: docker exec embed-certs-485234 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:11.245603  312861 oci.go:144] the created container "embed-certs-485234" has a running status.
	I1209 02:37:11.245680  312861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa...
	W1209 02:37:10.267075  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	W1209 02:37:12.746430  302799 pod_ready.go:104] pod "coredns-7d764666f9-m6tbs" is not "Ready", error: <nil>
	I1209 02:37:13.247465  302799 pod_ready.go:94] pod "coredns-7d764666f9-m6tbs" is "Ready"
	I1209 02:37:13.247521  302799 pod_ready.go:86] duration metric: took 34.507076064s for pod "coredns-7d764666f9-m6tbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.252380  302799 pod_ready.go:83] waiting for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.257623  302799 pod_ready.go:94] pod "etcd-no-preload-185074" is "Ready"
	I1209 02:37:13.257682  302799 pod_ready.go:86] duration metric: took 5.27485ms for pod "etcd-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.259429  302799 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.263091  302799 pod_ready.go:94] pod "kube-apiserver-no-preload-185074" is "Ready"
	I1209 02:37:13.263117  302799 pod_ready.go:86] duration metric: took 3.670015ms for pod "kube-apiserver-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.264813  302799 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:11.537220  312861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:11.563323  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.583790  312861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:11.583816  312861 kic_runner.go:114] Args: [docker exec --privileged embed-certs-485234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:37:11.626606  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:11.645123  312861 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:11.645212  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.664460  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.664789  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.664805  312861 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:11.795359  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.795387  312861 ubuntu.go:182] provisioning hostname "embed-certs-485234"
	I1209 02:37:11.795448  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.814229  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.814492  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.814514  312861 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-485234 && echo "embed-certs-485234" | sudo tee /etc/hostname
	I1209 02:37:11.948171  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-485234
	
	I1209 02:37:11.948244  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:11.966144  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:11.966365  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:11.966384  312861 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-485234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-485234/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-485234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:37:12.090842  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:37:12.090872  312861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:12.090923  312861 ubuntu.go:190] setting up certificates
	I1209 02:37:12.090933  312861 provision.go:84] configureAuth start
	I1209 02:37:12.090984  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.108441  312861 provision.go:143] copyHostCerts
	I1209 02:37:12.108498  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:12.108513  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:12.108581  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:12.108718  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:12.108731  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:12.108780  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:12.108915  312861 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:12.108926  312861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:12.108962  312861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:12.109046  312861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.embed-certs-485234 san=[127.0.0.1 192.168.94.2 embed-certs-485234 localhost minikube]
	I1209 02:37:12.185770  312861 provision.go:177] copyRemoteCerts
	I1209 02:37:12.185823  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:12.185867  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.203781  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.297266  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:12.315682  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 02:37:12.332372  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:37:12.348767  312861 provision.go:87] duration metric: took 257.824432ms to configureAuth
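
Note: configureAuth above generates a Docker-machine-style server certificate whose SANs cover the container's IP and names (san=[127.0.0.1 192.168.94.2 embed-certs-485234 localhost minikube]). For readers reproducing this step outside minikube, a rough openssl equivalent follows; this is a sketch only, file names are illustrative, and minikube actually does this in Go:

    # Sketch: approximate the server cert configureAuth generates (paths illustrative).
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-485234" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:embed-certs-485234,DNS:localhost,DNS:minikube') \
      -out server.pem
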
	I1209 02:37:12.348791  312861 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:12.348966  312861 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:12.349051  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.367892  312861 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:12.368130  312861 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1209 02:37:12.368152  312861 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:12.631127  312861 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:12.631150  312861 machine.go:97] duration metric: took 986.000884ms to provisionDockerMachine
	I1209 02:37:12.631160  312861 client.go:176] duration metric: took 6.184776828s to LocalClient.Create
	I1209 02:37:12.631178  312861 start.go:167] duration metric: took 6.184833791s to libmachine.API.Create "embed-certs-485234"
	I1209 02:37:12.631185  312861 start.go:293] postStartSetup for "embed-certs-485234" (driver="docker")
	I1209 02:37:12.631193  312861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:12.631247  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:12.631288  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.650047  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.745621  312861 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:12.749630  312861 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:12.749691  312861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:37:12.749704  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:12.749756  312861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:12.749822  312861 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:12.749906  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:12.758040  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:12.779782  312861 start.go:296] duration metric: took 148.5859ms for postStartSetup
	I1209 02:37:12.780088  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.798780  312861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/config.json ...
	I1209 02:37:12.799048  312861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:12.799087  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.816209  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.906142  312861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:12.910519  312861 start.go:128] duration metric: took 6.465788374s to createHost
	I1209 02:37:12.910538  312861 start.go:83] releasing machines lock for "embed-certs-485234", held for 6.465929672s
	I1209 02:37:12.910606  312861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-485234
	I1209 02:37:12.928304  312861 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:12.928356  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.928375  312861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:12.928447  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:12.946358  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:12.946972  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:13.091177  312861 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:13.097600  312861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:13.131258  312861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:13.135743  312861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:13.135810  312861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:13.162689  312861 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
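
Note: the find/-exec mv step above sidelines any pre-existing bridge/podman CNI configs by appending a .mk_disabled suffix, so the CNI minikube picks later is the only one left active. If the originals are ever needed again, a minimal reversal sketch:

    # Sketch: restore CNI configs that minikube renamed to *.mk_disabled.
    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] || continue              # glob may match nothing
      sudo mv "$f" "${f%.mk_disabled}"
    done
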
	I1209 02:37:13.162715  312861 start.go:496] detecting cgroup driver to use...
	I1209 02:37:13.162750  312861 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:37:13.162798  312861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:13.178717  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:13.190805  312861 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:13.190853  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:13.206264  312861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:13.222864  312861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:13.305814  312861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:13.390556  312861 docker.go:234] disabling docker service ...
	I1209 02:37:13.390674  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:13.409495  312861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:13.422267  312861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:13.506320  312861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:13.589113  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:13.600697  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:13.614485  312861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:13.614532  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.624541  312861 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:13.624587  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.633049  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.641219  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.650011  312861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:13.657733  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.665900  312861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.678728  312861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:13.686933  312861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:13.693823  312861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:37:13.700444  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:13.779960  312861 ssh_runner.go:195] Run: sudo systemctl restart crio
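
Note: the sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl), and the daemon-reload plus restart picks it up. A hedged check of the expected end state, with key names taken from the commands above:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
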
	I1209 02:37:13.910038  312861 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:13.910103  312861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:37:13.914205  312861 start.go:564] Will wait 60s for crictl version
	I1209 02:37:13.914265  312861 ssh_runner.go:195] Run: which crictl
	I1209 02:37:13.917709  312861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:13.941238  312861 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:13.941311  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.969399  312861 ssh_runner.go:195] Run: crio --version
	I1209 02:37:13.997525  312861 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:13.444584  302799 pod_ready.go:94] pod "kube-controller-manager-no-preload-185074" is "Ready"
	I1209 02:37:13.444613  302799 pod_ready.go:86] duration metric: took 179.781521ms for pod "kube-controller-manager-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:13.644581  302799 pod_ready.go:83] waiting for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.044726  302799 pod_ready.go:94] pod "kube-proxy-8jh88" is "Ready"
	I1209 02:37:14.044754  302799 pod_ready.go:86] duration metric: took 400.15086ms for pod "kube-proxy-8jh88" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.243839  302799 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644301  302799 pod_ready.go:94] pod "kube-scheduler-no-preload-185074" is "Ready"
	I1209 02:37:14.644322  302799 pod_ready.go:86] duration metric: took 400.457904ms for pod "kube-scheduler-no-preload-185074" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:14.644333  302799 pod_ready.go:40] duration metric: took 35.907468936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:14.691366  302799 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1209 02:37:14.693696  302799 out.go:179] * Done! kubectl is now configured to use "no-preload-185074" cluster and "default" namespace by default
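
Note: the pod_ready loop above (process 302799, interleaved with the embed-certs run) polls each labelled kube-system pod until it is Ready or gone. An approximate kubectl analogue of one such wait; the label comes from the log, the timeout is illustrative:

    kubectl --context no-preload-185074 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=120s
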
	I1209 02:37:13.998454  312861 cli_runner.go:164] Run: docker network inspect embed-certs-485234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:14.015735  312861 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:14.019587  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:14.029452  312861 kubeadm.go:884] updating cluster {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:14.029561  312861 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:14.029613  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.062629  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.062664  312861 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:14.062704  312861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:14.087930  312861 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:14.087950  312861 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:14.087958  312861 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:14.088051  312861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-485234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 02:37:14.088114  312861 ssh_runner.go:195] Run: crio config
	I1209 02:37:14.133509  312861 cni.go:84] Creating CNI manager for ""
	I1209 02:37:14.133535  312861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:14.133556  312861 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:14.133578  312861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-485234 NodeName:embed-certs-485234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:14.133735  312861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-485234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
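
Note: the kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check such a file by hand, recent kubeadm (v1.26+) ships a validator:

    # Validate the multi-document kubeadm config without applying it.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
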
	
	I1209 02:37:14.133794  312861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:14.141697  312861 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:14.141757  312861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:14.149416  312861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1209 02:37:14.162206  312861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:14.177373  312861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1209 02:37:14.189424  312861 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:14.192881  312861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
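
Note: the compound command above pins control-plane.minikube.internal in /etc/hosts idempotently: filter out any stale entry, append the fresh one to a temp file, then cp the result over /etc/hosts (cp rather than mv, so the file's ownership and labels survive). A generalized sketch; pin_host is a hypothetical helper, not minikube code:

    # Idempotently pin NAME to ADDR in /etc/hosts, mirroring the log line above.
    pin_host() {  # usage: pin_host 192.168.94.2 control-plane.minikube.internal
      local addr=$1 name=$2
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$addr" "$name"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
    }
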
	I1209 02:37:14.201952  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:14.282853  312861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:14.304730  312861 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234 for IP: 192.168.94.2
	I1209 02:37:14.304752  312861 certs.go:195] generating shared ca certs ...
	I1209 02:37:14.304774  312861 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.304940  312861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:14.305016  312861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:14.305033  312861 certs.go:257] generating profile certs ...
	I1209 02:37:14.305100  312861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key
	I1209 02:37:14.305120  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt with IP's: []
	I1209 02:37:14.359436  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt ...
	I1209 02:37:14.359461  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.crt: {Name:mkd2687220e2c1a496f0919e5b4ee3ae985b0d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359653  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key ...
	I1209 02:37:14.359668  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/client.key: {Name:mk9eda0520f2cbbe6316507c37cd6f28fc511268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.359822  312861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20
	I1209 02:37:14.359847  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1209 02:37:14.444770  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 ...
	I1209 02:37:14.444793  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20: {Name:mk94bd2fac7c7e957c0ee327319c5c1e8a6301f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.444968  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 ...
	I1209 02:37:14.444991  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20: {Name:mkacd03a1ebe1fb35635f22c6c191b2975875de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.445113  312861 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt
	I1209 02:37:14.445190  312861 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key.ad095f20 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key
	I1209 02:37:14.445244  312861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key
	I1209 02:37:14.445259  312861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt with IP's: []
	I1209 02:37:14.560806  312861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt ...
	I1209 02:37:14.560826  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt: {Name:mke7ad5eda062e0b1092e0004408a09aa647aeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:14.560983  312861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key ...
	I1209 02:37:14.561002  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key: {Name:mk93c4daac2f0f9d1f8c2f6e132f0bae11b524ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
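
Note: three profile certs are minted above: a client cert, an apiserver serving cert whose SANs span the service VIP and node IP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2), and an aggregator proxy-client cert. To eyeball the SANs on the serving cert (standard openssl; the path is from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
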
	I1209 02:37:14.561200  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:14.561241  312861 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:14.561252  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:14.561274  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:14.561307  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:14.561340  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:14.561405  312861 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:14.561980  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:14.580295  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:14.597083  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:14.613685  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:14.630255  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 02:37:14.648077  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:14.666598  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:14.683845  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/embed-certs-485234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:14.701559  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:14.724314  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:14.741496  312861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:14.760427  312861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:14.773786  312861 ssh_runner.go:195] Run: openssl version
	I1209 02:37:14.779710  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.787281  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:14.795901  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799927  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.799992  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:14.839135  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.847352  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:14.854769  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.861800  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:14.869148  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872807  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.872857  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:14.906788  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:14.913728  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:14.920733  312861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.928244  312861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:14.935526  312861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939120  312861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.939164  312861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:14.983518  312861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:14.991697  312861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
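
Note: the test/ln/openssl sequence above implements OpenSSL's hashed-directory convention: each trusted cert in /etc/ssl/certs gets a symlink named <subject-hash>.0, which is where 3ec20f2e.0, b5213941.0 and 51391683.0 come from. A one-liner that derives the same link for any cert:

    c=/usr/share/ca-certificates/145522.pem
    sudo ln -fs "$c" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$c")".0
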
	I1209 02:37:15.000864  312861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:15.005011  312861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:15.005053  312861 kubeadm.go:401] StartCluster: {Name:embed-certs-485234 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-485234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:15.005116  312861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:15.005173  312861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:15.035472  312861 cri.go:89] found id: ""
	I1209 02:37:15.035518  312861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:15.045322  312861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:15.053145  312861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:15.053203  312861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:15.061178  312861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:15.061197  312861 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:15.061235  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:15.068770  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:15.068824  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:15.075842  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:15.083627  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:15.083711  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:15.091022  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.098306  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:15.098366  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:15.105103  312861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:15.112368  312861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:15.112418  312861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 02:37:15.119369  312861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:15.155406  312861 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:15.155454  312861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:15.189920  312861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:15.190010  312861 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:15.190083  312861 kubeadm.go:319] OS: Linux
	I1209 02:37:15.190144  312861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:15.190210  312861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:15.190296  312861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:15.190379  312861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:15.190454  312861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:15.190527  312861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:15.190604  312861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:15.190702  312861 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:15.249252  312861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:15.249405  312861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:15.249583  312861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:15.256114  312861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:15.259205  312861 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:15.259301  312861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:15.259380  312861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:15.555393  312861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:15.791444  312861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:16.204198  312861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:16.347360  312861 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:16.874857  312861 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:16.875048  312861 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.314689  312861 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:17.314865  312861 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-485234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1209 02:37:17.499551  312861 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:17.696286  312861 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:17.984705  312861 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:17.984811  312861 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:18.173479  312861 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:18.852948  312861 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:19.295701  312861 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:19.424695  312861 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:19.612418  312861 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:19.613112  312861 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:19.616719  312861 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:37:19.618181  312861 out.go:252]   - Booting up control plane ...
	I1209 02:37:19.618275  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:37:19.618393  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:37:19.619018  312861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:37:19.649026  312861 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:37:19.649149  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:37:19.657257  312861 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:37:19.657507  312861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:37:19.657567  312861 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:37:19.759620  312861 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:37:19.759784  312861 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:37:20.761316  312861 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001746054s
	I1209 02:37:20.765776  312861 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:37:20.765912  312861 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1209 02:37:20.766025  312861 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:37:20.766123  312861 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:37:22.819775  312861 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.053221308s
	I1209 02:37:23.152996  312861 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.387202858s
	I1209 02:37:24.767584  312861 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00188914s
	I1209 02:37:24.786162  312861 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:37:24.798304  312861 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:37:24.807476  312861 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:37:24.807790  312861 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-485234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:37:24.816734  312861 kubeadm.go:319] [bootstrap-token] Using token: ty9ko3.el3azfnom318a2bn
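
Note: kubeadm init has reached its bootstrap-token phase; the token above is what a kubeadm join from a second node would present. This run is single-node, so no join actually happens; a purely illustrative sketch (the CA cert hash is not in this log and stays elided):

    sudo kubeadm token list                     # on the control plane
    # sudo kubeadm join control-plane.minikube.internal:8443 \
    #   --token ty9ko3.el3azfnom318a2bn \
    #   --discovery-token-ca-cert-hash sha256:<hash>
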
	
	
	==> CRI-O <==
	Dec 09 02:36:53 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:53.828789902Z" level=info msg="Started container" PID=1765 containerID=526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper id=0f72fc4f-a8b2-4395-8a49-1086eb16ef3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d181da0d644d527c0a4e8fb28e50439b85141bf50de780e61363b086a1998e8
	Dec 09 02:36:54 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:54.41013527Z" level=info msg="Removing container: 8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11" id=1ecf41c2-9cd3-4ceb-9b48-c3a53d37dccf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:36:54 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:36:54.42003985Z" level=info msg="Removed container 8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=1ecf41c2-9cd3-4ceb-9b48-c3a53d37dccf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.43436926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=65a4c159-aeb1-4e30-9475-f2cb736d759b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.435392982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19d91470-4b32-4578-9ded-b68cb9279b67 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.436531798Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4a2dbb33-dbf4-41db-b803-f5df96266045 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.436705922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442180098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.44238818Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cf0809ab71c619b2c152fe52fda4ff2a8c55131c68e5d1a053cf146db5923931/merged/etc/passwd: no such file or directory"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442425197Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cf0809ab71c619b2c152fe52fda4ff2a8c55131c68e5d1a053cf146db5923931/merged/etc/group: no such file or directory"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.442839977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.468939083Z" level=info msg="Created container 4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26: kube-system/storage-provisioner/storage-provisioner" id=4a2dbb33-dbf4-41db-b803-f5df96266045 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.469516634Z" level=info msg="Starting container: 4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26" id=eb3e0514-a067-4b34-91b8-fc90ccd99493 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:03 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:03.471504174Z" level=info msg="Started container" PID=1783 containerID=4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26 description=kube-system/storage-provisioner/storage-provisioner id=eb3e0514-a067-4b34-91b8-fc90ccd99493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b784636ec02d0254db0cfdf9b1a6cdfa54b38a43dfa8d91e75da6bca85d4c34
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.275376933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67de4005-e9ad-4c46-9ff1-1e4892f34039 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.276405066Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1aaa5f4-950b-4e8d-a46d-45e9fddce66b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.277566996Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=b7478a17-a13b-4979-989d-310f98c402df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.277755781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.284449887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.284961072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.313840449Z" level=info msg="Created container 3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=b7478a17-a13b-4979-989d-310f98c402df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.314474171Z" level=info msg="Starting container: 3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8" id=99dea247-056b-4522-a398-8caaed4565e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.316603802Z" level=info msg="Started container" PID=1818 containerID=3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper id=99dea247-056b-4522-a398-8caaed4565e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d181da0d644d527c0a4e8fb28e50439b85141bf50de780e61363b086a1998e8
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.478546759Z" level=info msg="Removing container: 526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88" id=28f0ae3c-e0a0-44c4-8b83-f7523f13e188 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:18 default-k8s-diff-port-512414 crio[569]: time="2025-12-09T02:37:18.488121351Z" level=info msg="Removed container 526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg/dashboard-metrics-scraper" id=28f0ae3c-e0a0-44c4-8b83-f7523f13e188 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3a149228f14b9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   1d181da0d644d       dashboard-metrics-scraper-6ffb444bf9-5kpdg             kubernetes-dashboard
	4f758b488db40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   2b784636ec02d       storage-provisioner                                    kube-system
	e54085e8d5133       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   39df424f96013       kubernetes-dashboard-855c9754f9-ktttw                  kubernetes-dashboard
	5bcc90f3b2b85       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   771d0965fab59       coredns-66bc5c9577-gtkkc                               kube-system
	402fd6ba3937b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   882ad7154fc22       busybox                                                default
	b44fa08e1c948       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   76108ede713f0       kube-proxy-nkdhm                                       kube-system
	048f1c30da0ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   2b784636ec02d       storage-provisioner                                    kube-system
	71d839a5d0175       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   0e5769c41efea       kindnet-5hz5b                                          kube-system
	5e7dc88fe52e6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   1ca2937fceae0       etcd-default-k8s-diff-port-512414                      kube-system
	53e2ef1a8035d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   6f9eed3928fd7       kube-controller-manager-default-k8s-diff-port-512414   kube-system
	08b84802df75f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   9ff98d0a11965       kube-apiserver-default-k8s-diff-port-512414            kube-system
	59648f3bd410e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   28acf9be160c4       kube-scheduler-default-k8s-diff-port-512414            kube-system
	
	
	==> coredns [5bcc90f3b2b85b5a813e4a6297bed0ba94510f88322bbc811d37c3b31e147ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51245 - 65356 "HINFO IN 3221666007643804166.6251389351970285842. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.08486378s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-512414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-512414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=default-k8s-diff-port-512414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-512414
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:37:12 +0000   Tue, 09 Dec 2025 02:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-512414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                73837a98-9d7d-40ab-bb93-0a67d7e98624
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-gtkkc                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-512414                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-5hz5b                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-512414             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-512414    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-nkdhm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-512414             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5kpdg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ktttw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-512414 event: Registered Node default-k8s-diff-port-512414 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-512414 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-512414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-512414 event: Registered Node default-k8s-diff-port-512414 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [5e7dc88fe52e694684d7007065cba87c04d380ba1290283d9662ad6f91aaafe2] <==
	{"level":"warn","ts":"2025-12-09T02:36:31.114426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.123312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.135304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.143359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.150273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.164649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.172374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.183722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.192779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.200791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.207249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.214893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.223512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.230181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.237488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.244883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.252097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.260067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.267903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.274620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.281844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.300455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.307378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.314253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:31.371805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38028","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:37:26 up  1:19,  0 user,  load average: 3.48, 2.67, 1.91
	Linux default-k8s-diff-port-512414 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [71d839a5d0175f6d17d7d3f55496772732092bcda33bd8ed81aa933ec7279dfa] <==
	I1209 02:36:32.865912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:32.866164       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1209 02:36:32.866322       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:32.866343       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:32.866371       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:33.070443       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:33.070473       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:33.070484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:33.071785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:33.462051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:33.462328       1 metrics.go:72] Registering metrics
	I1209 02:36:33.462396       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:43.070934       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:36:43.071009       1 main.go:301] handling current node
	I1209 02:36:53.075721       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:36:53.075783       1 main.go:301] handling current node
	I1209 02:37:03.070252       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:03.070298       1 main.go:301] handling current node
	I1209 02:37:13.070609       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:13.070675       1 main.go:301] handling current node
	I1209 02:37:23.073739       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 02:37:23.074368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08b84802df75faab1ac51f0d9397731ef50a3cf06d6bc33889322842ab9894e6] <==
	I1209 02:36:31.928248       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:36:31.928268       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:36:31.928889       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 02:36:31.929026       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 02:36:31.929565       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:31.929834       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 02:36:31.929895       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 02:36:31.932684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 02:36:31.941792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1209 02:36:31.942546       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 02:36:31.947176       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:31.970937       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:31.973820       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:31.974003       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:36:32.280944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:32.329283       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:32.358983       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:32.379726       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:32.392811       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:32.456724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.22.56"}
	I1209 02:36:32.474023       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.242.15"}
	I1209 02:36:32.833833       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:36:35.617312       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:36:35.666854       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:36:35.717522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53e2ef1a8035d284e5ca2d86b22685fdbc319dbfa71b2b00d3a4fda9676fdacd] <==
	I1209 02:36:35.245457       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 02:36:35.245487       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 02:36:35.245497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 02:36:35.245504       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 02:36:35.247664       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:36:35.251970       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:36:35.263391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:36:35.263415       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:36:35.263466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 02:36:35.263490       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:36:35.263493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:36:35.263491       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 02:36:35.263572       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:36:35.263598       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:36:35.263838       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 02:36:35.264092       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:36:35.265984       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:36:35.269219       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:36:35.272481       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:36:35.280656       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:36:35.283981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:36:35.285063       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:36:35.285077       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 02:36:35.285102       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 02:36:35.287239       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [b44fa08e1c948c8a2e74282b096d0d0f88dbea82e76db849be56ed398f3fe183] <==
	I1209 02:36:32.656438       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:32.714975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:36:32.816109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:36:32.816148       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1209 02:36:32.816237       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:32.838423       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:32.838487       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:36:32.844976       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:32.845424       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:36:32.845510       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:32.847926       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:32.848006       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:32.847840       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:32.848727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:32.849118       1 config.go:200] "Starting service config controller"
	I1209 02:36:32.849135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:32.849463       1 config.go:309] "Starting node config controller"
	I1209 02:36:32.849517       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:32.849526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:32.948498       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:32.950257       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:32.951488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [59648f3bd410e19a0b3346422e261893be00390058d6e433840a3d0576f9f237] <==
	I1209 02:36:30.753312       1 serving.go:386] Generated self-signed cert in-memory
	I1209 02:36:32.458882       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:36:32.458908       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:32.466454       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1209 02:36:32.466541       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1209 02:36:32.466605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.466716       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.466621       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.466798       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.466889       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:32.467674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:32.567520       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 02:36:32.567543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:32.567734       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 09 02:36:35 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:35.998222     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2d8f564-44f9-4bad-8be1-7ea025ad2cf4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ktttw\" (UID: \"a2d8f564-44f9-4bad-8be1-7ea025ad2cf4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktttw"
	Dec 09 02:36:35 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:35.998249     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7k8\" (UniqueName: \"kubernetes.io/projected/07a7e4be-d8c0-44e3-8c59-654c1a33b3c3-kube-api-access-nr7k8\") pod \"dashboard-metrics-scraper-6ffb444bf9-5kpdg\" (UID: \"07a7e4be-d8c0-44e3-8c59-654c1a33b3c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg"
	Dec 09 02:36:37 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:37.561949     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 09 02:36:42 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:42.726029     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktttw" podStartSLOduration=4.015708514 podStartE2EDuration="7.726006374s" podCreationTimestamp="2025-12-09 02:36:35 +0000 UTC" firstStartedPulling="2025-12-09 02:36:36.20424153 +0000 UTC m=+7.100038336" lastFinishedPulling="2025-12-09 02:36:39.914539379 +0000 UTC m=+10.810336196" observedRunningTime="2025-12-09 02:36:40.373171984 +0000 UTC m=+11.268968809" watchObservedRunningTime="2025-12-09 02:36:42.726006374 +0000 UTC m=+13.621803199"
	Dec 09 02:36:43 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:43.373179     725 scope.go:117] "RemoveContainer" containerID="77b373dcc10b3a8129df23b7ab4d13733cfb558de33889c1fe5ab5c1b0e540bc"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:44.377830     725 scope.go:117] "RemoveContainer" containerID="77b373dcc10b3a8129df23b7ab4d13733cfb558de33889c1fe5ab5c1b0e540bc"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:44.378024     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:44 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:44.378227     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:36:45 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:45.384664     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:45 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:45.384896     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:36:53 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:53.787431     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:54.408724     725 scope.go:117] "RemoveContainer" containerID="8fac0631c137c6c49d18c0c6ebb0dd331873e83b78c0739c93dcd100459cff11"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: I1209 02:36:54.408976     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:36:54 default-k8s-diff-port-512414 kubelet[725]: E1209 02:36:54.409203     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:03.434014     725 scope.go:117] "RemoveContainer" containerID="048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:03.787189     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:03 default-k8s-diff-port-512414 kubelet[725]: E1209 02:37:03.787352     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.274846     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.477350     725 scope.go:117] "RemoveContainer" containerID="526aa2f8db7becf067f06cd75ba32490fef0c9dbb08cfe4b23497ef7a3320f88"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: I1209 02:37:18.477590     725 scope.go:117] "RemoveContainer" containerID="3a149228f14b9bc91e9490c507ac6ec01b5cdf0332e388dcffa22a3d679d12c8"
	Dec 09 02:37:18 default-k8s-diff-port-512414 kubelet[725]: E1209 02:37:18.477811     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5kpdg_kubernetes-dashboard(07a7e4be-d8c0-44e3-8c59-654c1a33b3c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5kpdg" podUID="07a7e4be-d8c0-44e3-8c59-654c1a33b3c3"
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:21 default-k8s-diff-port-512414 systemd[1]: kubelet.service: Consumed 1.721s CPU time.
	
	
	==> kubernetes-dashboard [e54085e8d51335921b3d7fe0b9a1d7d90a704d7634df52d9f90ba12ae61894cb] <==
	2025/12/09 02:36:39 Starting overwatch
	2025/12/09 02:36:39 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:39 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:39 Using secret token for csrf signing
	2025/12/09 02:36:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:39 Successful initial request to the apiserver, version: v1.34.2
	2025/12/09 02:36:39 Generating JWE encryption key
	2025/12/09 02:36:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:40 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:40 Creating in-cluster Sidecar client
	2025/12/09 02:36:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:36:40 Serving insecurely on HTTP port: 9090
	2025/12/09 02:37:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [048f1c30da0ecec62a1fcba7f690097c9e30ead84da2485e05d76879313b176f] <==
	I1209 02:36:32.617701       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:37:02.621421       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4f758b488db4028cffb975b17003d2a2b2bb4353943d1193d53e37bb0c3b6a26] <==
	I1209 02:37:03.484302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:37:03.492401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:37:03.492440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:37:03.494500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:06.950259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:11.210550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:14.809513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:17.862733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.885518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.890484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:20.890758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:20.891084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802!
	I1209 02:37:20.891101       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2818b0d6-e891-4733-8290-62f4a6a50242", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802 became leader
	W1209 02:37:20.893623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.897472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:20.994313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-512414_cbd2fb55-8b0a-4135-9ec8-68a93c594802!
	W1209 02:37:22.901689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:22.906227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:24.909609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:24.914391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414: exit status 2 (433.154796ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.93s)
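
For hand reproduction of the two Pause failures in this report: minikube pause first disables the kubelet on the node, then lists CRI containers per namespace, and finally asks runc for its running-container list. A minimal sketch of the same node-side checks, assuming the no-preload-185074 profile from the failing test below (commands mirrored from the traces in this report, not an official minikube workflow):

	# mirror the node-side checks seen in the pause traces below
	minikube -p no-preload-185074 ssh "sudo systemctl is-active --quiet service kubelet"
	minikube -p no-preload-185074 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	minikube -p no-preload-185074 ssh "sudo runc list -f json"

The first two commands succeed in the traces; the third is the step that fails with "open /run/runc: no such file or directory", i.e. runc's default state root is absent on the node even though CRI-O still reports the containers.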

TestStartStop/group/no-preload/serial/Pause (7.5s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-185074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-185074 --alsologtostderr -v=1: exit status 80 (1.799918869s)

-- stdout --
	* Pausing node no-preload-185074 ... 
	
	

-- /stdout --
** stderr ** 
	I1209 02:37:26.477240  319158 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:26.477899  319158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.477913  319158 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:26.477921  319158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.478410  319158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:26.479527  319158 out.go:368] Setting JSON to false
	I1209 02:37:26.479548  319158 mustload.go:66] Loading cluster: no-preload-185074
	I1209 02:37:26.480204  319158 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:26.480805  319158 cli_runner.go:164] Run: docker container inspect no-preload-185074 --format={{.State.Status}}
	I1209 02:37:26.511200  319158 host.go:66] Checking if "no-preload-185074" exists ...
	I1209 02:37:26.511519  319158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:26.607897  319158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:37:26.593209117 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:26.608875  319158 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-185074 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:37:26.610869  319158 out.go:179] * Pausing node no-preload-185074 ... 
	I1209 02:37:26.612110  319158 host.go:66] Checking if "no-preload-185074" exists ...
	I1209 02:37:26.612486  319158 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:26.612539  319158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-185074
	I1209 02:37:26.635055  319158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/no-preload-185074/id_rsa Username:docker}
	I1209 02:37:26.732808  319158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:26.747866  319158 pause.go:52] kubelet running: true
	I1209 02:37:26.747960  319158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:26.951973  319158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:26.952066  319158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:27.028111  319158 cri.go:89] found id: "b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62"
	I1209 02:37:27.028138  319158 cri.go:89] found id: "44c225d6c3091f7459da4f454564761e1a7750a4d458862d0a09b0cddffed80e"
	I1209 02:37:27.028144  319158 cri.go:89] found id: "bf55bc97a247e88c685148e5cfafdc9f5a78f00ec8bc92045e9dccdb1872de23"
	I1209 02:37:27.028149  319158 cri.go:89] found id: "494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9"
	I1209 02:37:27.028154  319158 cri.go:89] found id: "5c6f48c9b6416c452f59edf0b90df1147d668c339e70df3ae54c128418ffbbff"
	I1209 02:37:27.028159  319158 cri.go:89] found id: "c5c4ce96abc06f5a5b23aafe5daf5879d64acdb88e8c6ffd8f7cf7c1ada39c1c"
	I1209 02:37:27.028163  319158 cri.go:89] found id: "9327d2d4d2c27fea6986f3c244048b51916d2021ddd3fdcb8b7969c3248eb12d"
	I1209 02:37:27.028167  319158 cri.go:89] found id: "d5c8daf1abc24fba86bc53274918db2a9e734b6cddd581f0b30523f24811caab"
	I1209 02:37:27.028170  319158 cri.go:89] found id: "0350088df68730310fdcf473d3556c4668d047069dccaa944bea1003c044ae64"
	I1209 02:37:27.028175  319158 cri.go:89] found id: "31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091"
	I1209 02:37:27.028187  319158 cri.go:89] found id: "5777f937bf741ba1dc62499f12167b6495deeedeca30041791c6f42d06337b5b"
	I1209 02:37:27.028197  319158 cri.go:89] found id: ""
	I1209 02:37:27.028270  319158 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:27.048571  319158 retry.go:31] will retry after 131.202882ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:27.180008  319158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:27.195052  319158 pause.go:52] kubelet running: false
	I1209 02:37:27.195104  319158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:27.370957  319158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:27.371050  319158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:27.460532  319158 cri.go:89] found id: "b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62"
	I1209 02:37:27.461007  319158 cri.go:89] found id: "44c225d6c3091f7459da4f454564761e1a7750a4d458862d0a09b0cddffed80e"
	I1209 02:37:27.461177  319158 cri.go:89] found id: "bf55bc97a247e88c685148e5cfafdc9f5a78f00ec8bc92045e9dccdb1872de23"
	I1209 02:37:27.461188  319158 cri.go:89] found id: "494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9"
	I1209 02:37:27.461193  319158 cri.go:89] found id: "5c6f48c9b6416c452f59edf0b90df1147d668c339e70df3ae54c128418ffbbff"
	I1209 02:37:27.461200  319158 cri.go:89] found id: "c5c4ce96abc06f5a5b23aafe5daf5879d64acdb88e8c6ffd8f7cf7c1ada39c1c"
	I1209 02:37:27.461237  319158 cri.go:89] found id: "9327d2d4d2c27fea6986f3c244048b51916d2021ddd3fdcb8b7969c3248eb12d"
	I1209 02:37:27.461241  319158 cri.go:89] found id: "d5c8daf1abc24fba86bc53274918db2a9e734b6cddd581f0b30523f24811caab"
	I1209 02:37:27.461246  319158 cri.go:89] found id: "0350088df68730310fdcf473d3556c4668d047069dccaa944bea1003c044ae64"
	I1209 02:37:27.461255  319158 cri.go:89] found id: "31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091"
	I1209 02:37:27.461268  319158 cri.go:89] found id: "5777f937bf741ba1dc62499f12167b6495deeedeca30041791c6f42d06337b5b"
	I1209 02:37:27.461272  319158 cri.go:89] found id: ""
	I1209 02:37:27.461339  319158 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:27.479041  319158 retry.go:31] will retry after 428.241884ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:37:27.907690  319158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:27.921249  319158 pause.go:52] kubelet running: false
	I1209 02:37:27.921313  319158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:37:28.094251  319158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:37:28.094340  319158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:37:28.170524  319158 cri.go:89] found id: "b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62"
	I1209 02:37:28.170554  319158 cri.go:89] found id: "44c225d6c3091f7459da4f454564761e1a7750a4d458862d0a09b0cddffed80e"
	I1209 02:37:28.170560  319158 cri.go:89] found id: "bf55bc97a247e88c685148e5cfafdc9f5a78f00ec8bc92045e9dccdb1872de23"
	I1209 02:37:28.170565  319158 cri.go:89] found id: "494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9"
	I1209 02:37:28.170602  319158 cri.go:89] found id: "5c6f48c9b6416c452f59edf0b90df1147d668c339e70df3ae54c128418ffbbff"
	I1209 02:37:28.170617  319158 cri.go:89] found id: "c5c4ce96abc06f5a5b23aafe5daf5879d64acdb88e8c6ffd8f7cf7c1ada39c1c"
	I1209 02:37:28.170625  319158 cri.go:89] found id: "9327d2d4d2c27fea6986f3c244048b51916d2021ddd3fdcb8b7969c3248eb12d"
	I1209 02:37:28.170630  319158 cri.go:89] found id: "d5c8daf1abc24fba86bc53274918db2a9e734b6cddd581f0b30523f24811caab"
	I1209 02:37:28.170651  319158 cri.go:89] found id: "0350088df68730310fdcf473d3556c4668d047069dccaa944bea1003c044ae64"
	I1209 02:37:28.170662  319158 cri.go:89] found id: "31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091"
	I1209 02:37:28.170671  319158 cri.go:89] found id: "5777f937bf741ba1dc62499f12167b6495deeedeca30041791c6f42d06337b5b"
	I1209 02:37:28.170675  319158 cri.go:89] found id: ""
	I1209 02:37:28.170722  319158 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:37:28.187087  319158 out.go:203] 
	W1209 02:37:28.188304  319158 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:37:28.188325  319158 out.go:285] * 
	W1209 02:37:28.192303  319158 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:37:28.193767  319158 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-185074 --alsologtostderr -v=1 failed: exit status 80
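The failure above bottoms out in `sudo runc list -f json` exiting 1 with `open /run/runc: no such file or directory`: runc's default state root is absent on the node even though crictl still reports running kube-system containers. A minimal diagnostic sketch for checking this by hand, assuming the node container name for this profile and stock CRI-O paths (none of these commands were run by the test itself):

	# shell into the minikube node container
	docker exec -it no-preload-185074 bash
	# which OCI runtime state directories exist? (/run is a tmpfs in this container)
	ls -d /run/runc /run/crun 2>/dev/null
	# which default runtime is CRI-O configured with?
	grep -r default_runtime /etc/crio/ 2>/dev/null
	# the exact call minikube pause makes, then the same call with an explicit root
	runc list -f json
	runc --root /run/runc list -f json

If CRI-O drives its containers through crun, or keeps runc state under a non-default root, the bare `runc list` fails exactly as logged while the workloads stay up.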
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-185074
helpers_test.go:243: (dbg) docker inspect no-preload-185074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	        "Created": "2025-12-09T02:35:10.661104017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:28.677719107Z",
	            "FinishedAt": "2025-12-09T02:36:27.569456145Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hostname",
	        "HostsPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hosts",
	        "LogPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75-json.log",
	        "Name": "/no-preload-185074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-185074:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-185074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	                "LowerDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-185074",
	                "Source": "/var/lib/docker/volumes/no-preload-185074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-185074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-185074",
	                "name.minikube.sigs.k8s.io": "no-preload-185074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c7af02eb60e8c90e83c2da8b5126338548ac6b7195fa71d4350b2fc28b5e611",
	            "SandboxKey": "/var/run/docker/netns/5c7af02eb60e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-185074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d42fa8488d6e111dc575a4746973e4e3d2a7c9b8452ce6de734cd48ffe8b1bf7",
	                    "EndpointID": "0cdd9a2ed8c8ac8f127882c43dccab7cfd26b8c037eee8bb0cabbe602c13583e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "42:7e:8f:57:ac:ac",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-185074",
	                        "4597603e9b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
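Individual fields of the inspect JSON above can be extracted with docker's --format Go templates instead of being read by eye; a small sketch against the Ports block shown (the expected value is taken from this run's output):

	# host port mapped to the node container's API server port 8443/tcp
	docker inspect no-preload-185074 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# expected for this run: 33091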
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074: exit status 2 (360.910308ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
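The --format={{.Host}} template above prints only the host field of minikube's status struct; other exported fields such as Kubelet and APIServer can be combined into one template to see why exit status 2 "may be ok". A sketch using the same binary and profile (field names are assumed from the status struct, not from this run):

	out/minikube-linux-amd64 status -p no-preload-185074 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
	# given the pause logs above (kubelet disabled), the host is likely Running
	# while the kubelet reports Stopped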
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-185074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-185074 logs -n 25: (2.700885465s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ image   │ no-preload-185074 image list --format=json                                                                                                                                                                                                           │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p auto-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-933067                  │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ pause   │ -p no-preload-185074 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-512414                                                                                                                                                                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:26.293783  319017 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:26.294004  319017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.294013  319017 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:26.294017  319017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.294206  319017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:26.294650  319017 out.go:368] Setting JSON to false
	I1209 02:37:26.295922  319017 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4795,"bootTime":1765243051,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:26.296007  319017 start.go:143] virtualization: kvm guest
	I1209 02:37:26.298044  319017 out.go:179] * [auto-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:26.299368  319017 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:26.299359  319017 notify.go:221] Checking for updates...
	I1209 02:37:26.300854  319017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:26.302711  319017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:26.306851  319017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:26.309967  319017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:26.311562  319017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:26.313254  319017 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:26.313401  319017 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:26.313519  319017 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:26.313658  319017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:26.347431  319017 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:26.347599  319017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:26.432129  319017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-09 02:37:26.418722469 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:26.432257  319017 docker.go:319] overlay module found
	I1209 02:37:26.433978  319017 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:26.435164  319017 start.go:309] selected driver: docker
	I1209 02:37:26.435182  319017 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:26.435213  319017 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:26.435986  319017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:26.516309  319017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:37:26.503075052 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:26.516538  319017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:26.516846  319017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:26.518428  319017 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:26.519628  319017 cni.go:84] Creating CNI manager for ""
	I1209 02:37:26.519785  319017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:26.519796  319017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:26.519879  319017 start.go:353] cluster config:
	{Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:26.521137  319017 out.go:179] * Starting "auto-933067" primary control-plane node in "auto-933067" cluster
	I1209 02:37:26.522276  319017 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:26.523817  319017 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:26.524859  319017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:26.524896  319017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:26.524908  319017 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:26.525013  319017 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:26.525024  319017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:26.525050  319017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:26.525146  319017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/config.json ...
	I1209 02:37:26.525170  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/config.json: {Name:mk29fa4c3200084b6c0630be845930c23edecba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:26.574558  319017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:26.574587  319017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:26.574606  319017 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:26.574669  319017 start.go:360] acquireMachinesLock for auto-933067: {Name:mkd1bc5f16440871c0ae9380bbd77b0ebf91ff14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:26.574790  319017 start.go:364] duration metric: took 95.425µs to acquireMachinesLock for "auto-933067"
	I1209 02:37:26.574821  319017 start.go:93] Provisioning new machine with config: &{Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:26.574924  319017 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.073286571Z" level=info msg="Started container" PID=1753 containerID=8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper id=bc623516-aacf-4906-878f-9377f5cb4e91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e2cdaa80919e11fe81c5a9b9db6fc9806032322eaed3b362bc9750b6243253c
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.12549655Z" level=info msg="Removing container: aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9" id=3988f03a-7112-4cc1-af77-6a9b49f15c87 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.13520842Z" level=info msg="Removed container aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=3988f03a-7112-4cc1-af77-6a9b49f15c87 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.144873076Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=554b6d0a-a15d-4a4c-af53-4d5557ce8148 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.145714341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1ffe817-bf87-44f6-941b-a63c583b01f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.151625291Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec765275-44c8-4e81-a3ed-c4cb56a3a5e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.151791613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234003838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234221648Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03aa2fd056e712dbbe40c5b18c12c73793df5b996ee9f54d4a491a298b6ae609/merged/etc/passwd: no such file or directory"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234249899Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03aa2fd056e712dbbe40c5b18c12c73793df5b996ee9f54d4a491a298b6ae609/merged/etc/group: no such file or directory"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234567018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.317545117Z" level=info msg="Created container b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62: kube-system/storage-provisioner/storage-provisioner" id=ec765275-44c8-4e81-a3ed-c4cb56a3a5e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.318359252Z" level=info msg="Starting container: b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62" id=d959f225-de92-498b-a4b5-a0f87d664cb6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.320712695Z" level=info msg="Started container" PID=1767 containerID=b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62 description=kube-system/storage-provisioner/storage-provisioner id=d959f225-de92-498b-a4b5-a0f87d664cb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2b1eada05ea7a64c76f8127d3a6cc44a8203770a338d75d47a66b37daf9ffae
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.025752236Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fc3b72a0-b3c0-4119-8b24-896894793732 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.026682947Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=14afd3e2-22fe-424d-aec1-e6d365efa92f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.027777012Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=7e7ea59e-494b-4444-bcfd-ad706cd5a82e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.027920766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.034981301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.035628402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.068871817Z" level=info msg="Created container 31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=7e7ea59e-494b-4444-bcfd-ad706cd5a82e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.0708641Z" level=info msg="Starting container: 31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091" id=9fc6905d-4366-4b08-80c2-db72de395326 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.0737067Z" level=info msg="Started container" PID=1805 containerID=31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper id=9fc6905d-4366-4b08-80c2-db72de395326 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e2cdaa80919e11fe81c5a9b9db6fc9806032322eaed3b362bc9750b6243253c
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.1963657Z" level=info msg="Removing container: 8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27" id=1e5fa587-5306-4972-9c85-c3d6469c055e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.206694823Z" level=info msg="Removed container 8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=1e5fa587-5306-4972-9c85-c3d6469c055e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	31e834b685315       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   8e2cdaa80919e       dashboard-metrics-scraper-867fb5f87b-wcj5m   kubernetes-dashboard
	b9e869a3748a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   c2b1eada05ea7       storage-provisioner                          kube-system
	5777f937bf741       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   25a0e48c8e076       kubernetes-dashboard-b84665fb8-kvvqg         kubernetes-dashboard
	a58ef5a8f6714       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   4154567bd2799       busybox                                      default
	44c225d6c3091       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   b46d3bee9d461       coredns-7d764666f9-m6tbs                     kube-system
	bf55bc97a247e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           51 seconds ago      Running             kube-proxy                  0                   f6007ecb55102       kube-proxy-8jh88                             kube-system
	494db3b633a30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   c2b1eada05ea7       storage-provisioner                          kube-system
	5c6f48c9b6416       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   9c7f257169cd5       kindnet-pflxj                                kube-system
	c5c4ce96abc06       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           54 seconds ago      Running             kube-controller-manager     0                   2456b18a023f2       kube-controller-manager-no-preload-185074    kube-system
	9327d2d4d2c27       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   f8f093ec36a9f       etcd-no-preload-185074                       kube-system
	d5c8daf1abc24       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           54 seconds ago      Running             kube-apiserver              0                   a0ed431046c9f       kube-apiserver-no-preload-185074             kube-system
	0350088df6873       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           54 seconds ago      Running             kube-scheduler              0                   09c86933674cd       kube-scheduler-no-preload-185074             kube-system
	
	
	==> coredns [44c225d6c3091f7459da4f454564761e1a7750a4d458862d0a09b0cddffed80e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37001 - 50335 "HINFO IN 3261395760700908320.5592896465674935216. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105687385s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-185074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-185074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=no-preload-185074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-185074
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-185074
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                bea297a5-f68c-4ca1-862a-f85a9f2be474
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-m6tbs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-185074                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-pflxj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-185074              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-185074     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-8jh88                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-185074              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-wcj5m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-kvvqg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-185074 event: Registered Node no-preload-185074 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-185074 event: Registered Node no-preload-185074 in Controller
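
The node description above is the kind of section the post-mortem collector assembles by shelling out to kubectl. A minimal Go sketch of that pattern, assuming kubectl is on PATH and the no-preload-185074 context from this run still exists; the exact invocation minikube uses internally may differ:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Shell out the way the test helpers do; the context and node
        // name are taken from the report above.
        out, err := exec.Command("kubectl", "--context", "no-preload-185074",
            "describe", "node", "no-preload-185074").CombinedOutput()
        if err != nil {
            fmt.Println("describe failed:", err)
        }
        fmt.Print(string(out))
    }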
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [9327d2d4d2c27fea6986f3c244048b51916d2021ddd3fdcb8b7969c3248eb12d] <==
	{"level":"warn","ts":"2025-12-09T02:36:36.491307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.497243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.503315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.509466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.515709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.526736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.533062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.540052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.546437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.554446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.561961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.569332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.576166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.584193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.592942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.599331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.616859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.624221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.630045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.636912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.681492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:36:46.311851Z","caller":"traceutil/trace.go:172","msg":"trace[929441314] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"135.012398ms","start":"2025-12-09T02:36:46.176817Z","end":"2025-12-09T02:36:46.311830Z","steps":["trace[929441314] 'process raft request'  (duration: 134.962387ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:36:46.311903Z","caller":"traceutil/trace.go:172","msg":"trace[148916149] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"136.120279ms","start":"2025-12-09T02:36:46.175768Z","end":"2025-12-09T02:36:46.311888Z","steps":["trace[148916149] 'process raft request'  (duration: 104.156107ms)","trace[148916149] 'compare'  (duration: 31.760851ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:37:09.533929Z","caller":"traceutil/trace.go:172","msg":"trace[710132148] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"143.996949ms","start":"2025-12-09T02:37:09.389908Z","end":"2025-12-09T02:37:09.533905Z","steps":["trace[710132148] 'process raft request'  (duration: 63.645892ms)","trace[710132148] 'compare'  (duration: 80.220621ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:37:10.263538Z","caller":"traceutil/trace.go:172","msg":"trace[72109369] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"108.248508ms","start":"2025-12-09T02:37:10.155274Z","end":"2025-12-09T02:37:10.263522Z","steps":["trace[72109369] 'process raft request'  (duration: 108.143311ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:37:30 up  1:19,  0 user,  load average: 3.28, 2.64, 1.91
	Linux no-preload-185074 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c6f48c9b6416c452f59edf0b90df1147d668c339e70df3ae54c128418ffbbff] <==
	I1209 02:36:38.626162       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:38.626422       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1209 02:36:38.626562       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:38.626578       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:38.626597       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:38.833180       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:38.924367       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:38.924388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:38.924545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:39.124530       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:39.124560       1 metrics.go:72] Registering metrics
	I1209 02:36:39.124629       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:48.832808       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:36:48.832882       1 main.go:301] handling current node
	I1209 02:36:58.832707       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:36:58.832772       1 main.go:301] handling current node
	I1209 02:37:08.832732       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:08.832770       1 main.go:301] handling current node
	I1209 02:37:18.838067       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:18.838106       1 main.go:301] handling current node
	I1209 02:37:28.834721       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:28.834768       1 main.go:301] handling current node
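
The kindnet entries repeat a "Handling node" / "handling current node" pair roughly every ten seconds, which is a fixed-interval reconcile loop. A minimal sketch of that loop; the 10s period is inferred from the timestamps above, not read from kindnet's configuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // One reconcile per tick, matching the ~10s cadence in the log.
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()

        for i := 0; i < 3; i++ {
            <-ticker.C
            fmt.Println("Handling node with IPs: map[192.168.103.2:{}]")
            fmt.Println("handling current node")
        }
    }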
	
	
	==> kube-apiserver [d5c8daf1abc24fba86bc53274918db2a9e734b6cddd581f0b30523f24811caab] <==
	I1209 02:36:37.160746       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:36:37.160752       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:36:37.160338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 02:36:37.161177       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:37.161366       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.161448       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.161525       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.165176       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:37.167796       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:37.176392       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 02:36:37.181681       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.181702       1 policy_source.go:248] refreshing policies
	I1209 02:36:37.187835       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:37.218830       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:37.499040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:37.525901       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:37.544416       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:37.551052       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:37.559345       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:37.590608       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.139.160"}
	I1209 02:36:37.602685       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.57.128"}
	I1209 02:36:38.063467       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:36:40.743760       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:36:40.844471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:36:40.942960       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c5c4ce96abc06f5a5b23aafe5daf5879d64acdb88e8c6ffd8f7cf7c1ada39c1c] <==
	I1209 02:36:40.297676       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:36:40.297686       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:40.297693       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.297759       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1209 02:36:40.297871       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-185074"
	I1209 02:36:40.297984       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1209 02:36:40.298260       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298289       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298290       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298332       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.299690       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.302764       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.303299       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:40.303736       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.303747       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304359       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304674       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304833       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.308124       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.308197       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.315059       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.400349       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.400385       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:36:40.400392       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:36:40.405411       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [bf55bc97a247e88c685148e5cfafdc9f5a78f00ec8bc92045e9dccdb1872de23] <==
	I1209 02:36:38.433996       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:38.512782       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:38.613214       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:38.613292       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1209 02:36:38.613399       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:38.635120       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:38.635202       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:36:38.641375       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:38.641821       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:36:38.641858       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:38.643570       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:38.643579       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:38.643585       1 config.go:309] "Starting node config controller"
	I1209 02:36:38.643983       1 config.go:200] "Starting service config controller"
	I1209 02:36:38.644280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:38.644483       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:38.644672       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:38.644722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:38.644756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:38.744512       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:38.745073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:38.745083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
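
The "Waiting for caches to sync" / "Caches are synced" pairs are client-go's informer start-up gate: work is held back until the initial cache fill finishes. A minimal sketch of the gating pattern, using a closed channel as the signal; this illustrates the pattern, not client-go's actual implementation:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        synced := make(chan struct{})

        go func() {
            // Stand-in for the initial LIST/WATCH that fills the cache.
            time.Sleep(100 * time.Millisecond)
            close(synced)
        }()

        fmt.Println("Waiting for caches to sync")
        <-synced // consumers block here until the cache is warm
        fmt.Println("Caches are synced")
    }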
	
	
	==> kube-scheduler [0350088df68730310fdcf473d3556c4668d047069dccaa944bea1003c044ae64] <==
	I1209 02:36:35.720155       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:36:37.079588       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:37.079709       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:37.079778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:37.079806       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:37.122891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:36:37.134036       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:37.137895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:37.137921       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:37.138580       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:37.138674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:37.238157       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:36:52 no-preload-185074 kubelet[717]: E1209 02:36:52.099111     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: E1209 02:36:54.316630     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: I1209 02:36:54.316716     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: E1209 02:36:54.316937     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.025401     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.025440     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.124194     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.124429     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.124467     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.124668     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: E1209 02:37:04.317022     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: I1209 02:37:04.317059     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: E1209 02:37:04.317215     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:09 no-preload-185074 kubelet[717]: I1209 02:37:09.144421     717 scope.go:122] "RemoveContainer" containerID="494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9"
	Dec 09 02:37:12 no-preload-185074 kubelet[717]: E1209 02:37:12.746073     717 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-m6tbs" containerName="coredns"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.025146     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.025209     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.191373     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.191583     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.191611     717 scope.go:122] "RemoveContainer" containerID="31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.192466     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:26 no-preload-185074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:26 no-preload-185074 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:26 no-preload-185074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:26 no-preload-185074 systemd[1]: kubelet.service: Consumed 1.652s CPU time.
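
The kubelet lines above show the CrashLoopBackOff delay for dashboard-metrics-scraper doubling across restarts: back-off 10s, then 20s, then 40s. A minimal sketch of that schedule; the 5m cap is the documented kubelet default and an assumption here, since this log only reaches 40s:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CrashLoopBackOff: the delay doubles after each failed restart.
        // The 10s start matches the log; the 5m cap is assumed from the
        // kubelet default, as these logs stop at 40s.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }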
	
	
	==> kubernetes-dashboard [5777f937bf741ba1dc62499f12167b6495deeedeca30041791c6f42d06337b5b] <==
	2025/12/09 02:36:45 Starting overwatch
	2025/12/09 02:36:45 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:45 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:45 Using secret token for csrf signing
	2025/12/09 02:36:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:45 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/09 02:36:45 Generating JWE encryption key
	2025/12/09 02:36:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:45 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:45 Creating in-cluster Sidecar client
	2025/12/09 02:36:45 Serving insecurely on HTTP port: 9090
	2025/12/09 02:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:37:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9] <==
	I1209 02:36:38.405709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:37:08.409616       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
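
The first storage-provisioner instance died because its version probe against the kubernetes service VIP timed out; the replacement (below) came up once the VIP was reachable. A rough reproduction of that probe, with the VIP and 32s timeout taken from the fatal log line; InsecureSkipVerify stands in for the in-cluster CA wiring this sketch omits:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // VIP and timeout come from the fatal log line above. The real
        // provisioner uses the in-cluster config and cluster CA instead
        // of skipping verification.
        client := &http.Client{
            Timeout: 32 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.96.0.1:443/version")
        if err != nil {
            fmt.Println("error getting server version:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver reachable:", resp.Status)
    }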
	
	
	==> storage-provisioner [b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62] <==
	I1209 02:37:09.335513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:37:09.343693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:37:09.343739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:37:09.385830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:12.841719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:17.102236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.700981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:23.754516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.778022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.784416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:26.784589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:26.784814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb!
	I1209 02:37:26.786002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49513343-9b98-4fd9-a16e-c626e02acaeb", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb became leader
	W1209 02:37:26.804151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.808782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:26.885163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb!
	W1209 02:37:28.812292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:28.818835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:30.822030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:30.870473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185074 -n no-preload-185074: exit status 2 (408.652028ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
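
The --format flag takes a Go template rendered against minikube's status struct, which is why the command can print Running on stdout while still exiting 2: the template rendered, but another component (here, the stopped kubelet) was unhealthy. A minimal sketch of that rendering, with a stand-in struct; minikube's real type has more fields:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for minikube's internal status struct; the
    // real type has more fields than this sketch needs.
    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
        // Mirrors the report above: APIServer renders as Running even
        // though the status command itself exited non-zero.
        _ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
    }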
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-185074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-185074
helpers_test.go:243: (dbg) docker inspect no-preload-185074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	        "Created": "2025-12-09T02:35:10.661104017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:36:28.677719107Z",
	            "FinishedAt": "2025-12-09T02:36:27.569456145Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hostname",
	        "HostsPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/hosts",
	        "LogPath": "/var/lib/docker/containers/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75/4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75-json.log",
	        "Name": "/no-preload-185074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-185074:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-185074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4597603e9b7ff87dc692bde66f75a7b0c02e112b873f8f022db00bb1a840df75",
	                "LowerDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7caecfbdc61d6b2599dbc5c558ed19ecc8fbdfd47dbdcf6a92f0ec7ee1a86746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-185074",
	                "Source": "/var/lib/docker/volumes/no-preload-185074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-185074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-185074",
	                "name.minikube.sigs.k8s.io": "no-preload-185074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c7af02eb60e8c90e83c2da8b5126338548ac6b7195fa71d4350b2fc28b5e611",
	            "SandboxKey": "/var/run/docker/netns/5c7af02eb60e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-185074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d42fa8488d6e111dc575a4746973e4e3d2a7c9b8452ce6de734cd48ffe8b1bf7",
	                    "EndpointID": "0cdd9a2ed8c8ac8f127882c43dccab7cfd26b8c037eee8bb0cabbe602c13583e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "42:7e:8f:57:ac:ac",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-185074",
	                        "4597603e9b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074: exit status 2 (364.891223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-185074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-185074 logs -n 25: (1.287430773s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-828614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ image   │ no-preload-185074 image list --format=json                                                                                                                                                                                                           │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p auto-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-933067                  │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ pause   │ -p no-preload-185074 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-512414                                                                                                                                                                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p default-k8s-diff-port-512414                                                                                                                                                                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
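	The long-running entries in the audit table above are ordinary minikube invocations and can be replayed by hand. A minimal sketch of the embed-certs start taken from the table (binary path, profile name, and flags are copied from the table row; nothing is added):
	
	  out/minikube-linux-amd64 start -p embed-certs-485234 --memory=3072 \
	    --alsologtostderr --wait=true --embed-certs --driver=docker \
	    --container-runtime=crio --kubernetes-version=v1.34.2
	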
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:26.293783  319017 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:26.294004  319017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.294013  319017 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:26.294017  319017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:26.294206  319017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:26.294650  319017 out.go:368] Setting JSON to false
	I1209 02:37:26.295922  319017 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4795,"bootTime":1765243051,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:26.296007  319017 start.go:143] virtualization: kvm guest
	I1209 02:37:26.298044  319017 out.go:179] * [auto-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:26.299368  319017 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:26.299359  319017 notify.go:221] Checking for updates...
	I1209 02:37:26.300854  319017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:26.302711  319017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:26.306851  319017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:26.309967  319017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:26.311562  319017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:26.313254  319017 config.go:182] Loaded profile config "default-k8s-diff-port-512414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:26.313401  319017 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:26.313519  319017 config.go:182] Loaded profile config "no-preload-185074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:37:26.313658  319017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:26.347431  319017 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:26.347599  319017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:26.432129  319017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-09 02:37:26.418722469 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:26.432257  319017 docker.go:319] overlay module found
	I1209 02:37:26.433978  319017 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:26.435164  319017 start.go:309] selected driver: docker
	I1209 02:37:26.435182  319017 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:26.435213  319017 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:26.435986  319017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:26.516309  319017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:37:26.503075052 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:26.516538  319017 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:26.516846  319017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:26.518428  319017 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:26.519628  319017 cni.go:84] Creating CNI manager for ""
	I1209 02:37:26.519785  319017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:26.519796  319017 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 02:37:26.519879  319017 start.go:353] cluster config:
	{Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:26.521137  319017 out.go:179] * Starting "auto-933067" primary control-plane node in "auto-933067" cluster
	I1209 02:37:26.522276  319017 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:26.523817  319017 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:26.524859  319017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:26.524896  319017 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:26.524908  319017 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:26.525013  319017 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:26.525024  319017 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:26.525050  319017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:26.525146  319017 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/config.json ...
	I1209 02:37:26.525170  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/config.json: {Name:mk29fa4c3200084b6c0630be845930c23edecba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:26.574558  319017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:26.574587  319017 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:26.574606  319017 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:26.574669  319017 start.go:360] acquireMachinesLock for auto-933067: {Name:mkd1bc5f16440871c0ae9380bbd77b0ebf91ff14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:26.574790  319017 start.go:364] duration metric: took 95.425µs to acquireMachinesLock for "auto-933067"
	I1209 02:37:26.574821  319017 start.go:93] Provisioning new machine with config: &{Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:26.574924  319017 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:37:26.495340  312861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:37:26.495455  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:26.495490  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-485234 minikube.k8s.io/updated_at=2025_12_09T02_37_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=embed-certs-485234 minikube.k8s.io/primary=true
	I1209 02:37:26.509093  312861 ops.go:34] apiserver oom_adj: -16
	I1209 02:37:26.602800  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:27.103347  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:27.603397  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:28.102919  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:28.603949  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:29.103511  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:29.602901  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:30.103605  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:30.603881  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:31.102935  312861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:31.182166  312861 kubeadm.go:1114] duration metric: took 4.686769562s to wait for elevateKubeSystemPrivileges
	I1209 02:37:31.182210  312861 kubeadm.go:403] duration metric: took 16.177157626s to StartCluster
	I1209 02:37:31.182234  312861 settings.go:142] acquiring lock: {Name:mk9e9ae89c204c39718782586a8846a06bf7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:31.182310  312861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:31.184343  312861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/kubeconfig: {Name:mkdb255fe00589d585bf0c5de8d363ebf8d1b6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:31.184592  312861 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:31.184724  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 02:37:31.184732  312861 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 02:37:31.184836  312861 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-485234"
	I1209 02:37:31.184864  312861 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-485234"
	I1209 02:37:31.184894  312861 host.go:66] Checking if "embed-certs-485234" exists ...
	I1209 02:37:31.184913  312861 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:31.184903  312861 addons.go:70] Setting default-storageclass=true in profile "embed-certs-485234"
	I1209 02:37:31.184992  312861 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-485234"
	I1209 02:37:31.185444  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:31.185451  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:31.186871  312861 out.go:179] * Verifying Kubernetes components...
	I1209 02:37:31.189058  312861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:31.212114  312861 addons.go:239] Setting addon default-storageclass=true in "embed-certs-485234"
	I1209 02:37:31.212168  312861 host.go:66] Checking if "embed-certs-485234" exists ...
	I1209 02:37:31.212619  312861 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:37:31.216419  312861 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 02:37:31.219532  312861 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:37:31.219551  312861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 02:37:31.219610  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:31.243834  312861 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 02:37:31.243859  312861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 02:37:31.243925  312861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:37:31.246867  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:26.576727  319017 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:26.577286  319017 start.go:159] libmachine.API.Create for "auto-933067" (driver="docker")
	I1209 02:37:26.577396  319017 client.go:173] LocalClient.Create starting
	I1209 02:37:26.577510  319017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:26.577550  319017 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:26.577570  319017 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:26.577628  319017 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:26.577687  319017 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:26.577701  319017 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:26.578334  319017 cli_runner.go:164] Run: docker network inspect auto-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:26.603676  319017 cli_runner.go:211] docker network inspect auto-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:26.603765  319017 network_create.go:284] running [docker network inspect auto-933067] to gather additional debugging logs...
	I1209 02:37:26.603786  319017 cli_runner.go:164] Run: docker network inspect auto-933067
	W1209 02:37:26.626284  319017 cli_runner.go:211] docker network inspect auto-933067 returned with exit code 1
	I1209 02:37:26.626325  319017 network_create.go:287] error running [docker network inspect auto-933067]: docker network inspect auto-933067: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-933067 not found
	I1209 02:37:26.626346  319017 network_create.go:289] output of [docker network inspect auto-933067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-933067 not found
	
	** /stderr **
	I1209 02:37:26.626478  319017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:26.646546  319017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:26.647339  319017 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:26.648110  319017 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:26.648782  319017 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e16439d105c6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:ee:5c:7c:6c:62} reservation:<nil>}
	I1209 02:37:26.649748  319017 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e83670}
	I1209 02:37:26.649779  319017 network_create.go:124] attempt to create docker network auto-933067 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1209 02:37:26.649820  319017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-933067 auto-933067
	I1209 02:37:26.703528  319017 network_create.go:108] docker network auto-933067 192.168.85.0/24 created
	I1209 02:37:26.703552  319017 kic.go:121] calculated static IP "192.168.85.2" for the "auto-933067" container
	I1209 02:37:26.703608  319017 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:26.723005  319017 cli_runner.go:164] Run: docker volume create auto-933067 --label name.minikube.sigs.k8s.io=auto-933067 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:26.748752  319017 oci.go:103] Successfully created a docker volume auto-933067
	I1209 02:37:26.748861  319017 cli_runner.go:164] Run: docker run --rm --name auto-933067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-933067 --entrypoint /usr/bin/test -v auto-933067:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:27.191021  319017 oci.go:107] Successfully prepared a docker volume auto-933067
	I1209 02:37:27.191105  319017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:27.191122  319017 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:27.191188  319017 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:30.903170  319017 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.711936029s)
	I1209 02:37:30.903204  319017 kic.go:203] duration metric: took 3.712078514s to extract preloaded images to volume ...
	W1209 02:37:30.903298  319017 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:30.903374  319017 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:30.903430  319017 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:30.969494  319017 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-933067 --name auto-933067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-933067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-933067 --network auto-933067 --ip 192.168.85.2 --volume auto-933067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:37:31.266863  312861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:37:31.297939  312861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 02:37:31.354476  312861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:31.371688  312861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 02:37:31.387711  312861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 02:37:31.508358  312861 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1209 02:37:31.510073  312861 node_ready.go:35] waiting up to 6m0s for node "embed-certs-485234" to be "Ready" ...
	I1209 02:37:31.755024  312861 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
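	The addons above were enabled by scp'ing manifests to /etc/kubernetes/addons and applying them with the cluster's own kubectl, as the Run lines show. A quick verification sketch (assumes the kubeconfig path this run wrote; both commands are standard kubectl):
	
	  kubectl --kubeconfig /home/jenkins/minikube-integration/22081-11001/kubeconfig \
	    -n kube-system get pod storage-provisioner
	  kubectl --kubeconfig /home/jenkins/minikube-integration/22081-11001/kubeconfig \
	    get storageclass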
	
	
	==> CRI-O <==
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.073286571Z" level=info msg="Started container" PID=1753 containerID=8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper id=bc623516-aacf-4906-878f-9377f5cb4e91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e2cdaa80919e11fe81c5a9b9db6fc9806032322eaed3b362bc9750b6243253c
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.12549655Z" level=info msg="Removing container: aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9" id=3988f03a-7112-4cc1-af77-6a9b49f15c87 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:02 no-preload-185074 crio[566]: time="2025-12-09T02:37:02.13520842Z" level=info msg="Removed container aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=3988f03a-7112-4cc1-af77-6a9b49f15c87 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.144873076Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=554b6d0a-a15d-4a4c-af53-4d5557ce8148 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.145714341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1ffe817-bf87-44f6-941b-a63c583b01f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.151625291Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec765275-44c8-4e81-a3ed-c4cb56a3a5e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.151791613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234003838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234221648Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03aa2fd056e712dbbe40c5b18c12c73793df5b996ee9f54d4a491a298b6ae609/merged/etc/passwd: no such file or directory"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234249899Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03aa2fd056e712dbbe40c5b18c12c73793df5b996ee9f54d4a491a298b6ae609/merged/etc/group: no such file or directory"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.234567018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.317545117Z" level=info msg="Created container b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62: kube-system/storage-provisioner/storage-provisioner" id=ec765275-44c8-4e81-a3ed-c4cb56a3a5e1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.318359252Z" level=info msg="Starting container: b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62" id=d959f225-de92-498b-a4b5-a0f87d664cb6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:09 no-preload-185074 crio[566]: time="2025-12-09T02:37:09.320712695Z" level=info msg="Started container" PID=1767 containerID=b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62 description=kube-system/storage-provisioner/storage-provisioner id=d959f225-de92-498b-a4b5-a0f87d664cb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2b1eada05ea7a64c76f8127d3a6cc44a8203770a338d75d47a66b37daf9ffae
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.025752236Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fc3b72a0-b3c0-4119-8b24-896894793732 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.026682947Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=14afd3e2-22fe-424d-aec1-e6d365efa92f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.027777012Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=7e7ea59e-494b-4444-bcfd-ad706cd5a82e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.027920766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.034981301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.035628402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.068871817Z" level=info msg="Created container 31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=7e7ea59e-494b-4444-bcfd-ad706cd5a82e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.0708641Z" level=info msg="Starting container: 31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091" id=9fc6905d-4366-4b08-80c2-db72de395326 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.0737067Z" level=info msg="Started container" PID=1805 containerID=31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper id=9fc6905d-4366-4b08-80c2-db72de395326 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e2cdaa80919e11fe81c5a9b9db6fc9806032322eaed3b362bc9750b6243253c
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.1963657Z" level=info msg="Removing container: 8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27" id=1e5fa587-5306-4972-9c85-c3d6469c055e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:37:25 no-preload-185074 crio[566]: time="2025-12-09T02:37:25.206694823Z" level=info msg="Removed container 8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m/dashboard-metrics-scraper" id=1e5fa587-5306-4972-9c85-c3d6469c055e name=/runtime.v1.RuntimeService/RemoveContainer
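	The Created/Started/Removed cycle above, together with ATTEMPT 3 in the container status below, shows dashboard-metrics-scraper being restarted repeatedly. To query the same state on a live profile one could go through CRI-O directly (a sketch; crictl inside the minikube node container is already pointed at the CRI-O socket):
	
	  minikube -p no-preload-185074 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper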
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	31e834b685315       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   8e2cdaa80919e       dashboard-metrics-scraper-867fb5f87b-wcj5m   kubernetes-dashboard
	b9e869a3748a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   c2b1eada05ea7       storage-provisioner                          kube-system
	5777f937bf741       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   25a0e48c8e076       kubernetes-dashboard-b84665fb8-kvvqg         kubernetes-dashboard
	a58ef5a8f6714       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   4154567bd2799       busybox                                      default
	44c225d6c3091       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   b46d3bee9d461       coredns-7d764666f9-m6tbs                     kube-system
	bf55bc97a247e       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           54 seconds ago      Running             kube-proxy                  0                   f6007ecb55102       kube-proxy-8jh88                             kube-system
	494db3b633a30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   c2b1eada05ea7       storage-provisioner                          kube-system
	5c6f48c9b6416       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   9c7f257169cd5       kindnet-pflxj                                kube-system
	c5c4ce96abc06       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           57 seconds ago      Running             kube-controller-manager     0                   2456b18a023f2       kube-controller-manager-no-preload-185074    kube-system
	9327d2d4d2c27       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   f8f093ec36a9f       etcd-no-preload-185074                       kube-system
	d5c8daf1abc24       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           57 seconds ago      Running             kube-apiserver              0                   a0ed431046c9f       kube-apiserver-no-preload-185074             kube-system
	0350088df6873       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           57 seconds ago      Running             kube-scheduler              0                   09c86933674cd       kube-scheduler-no-preload-185074             kube-system
	
	
	==> coredns [44c225d6c3091f7459da4f454564761e1a7750a4d458862d0a09b0cddffed80e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37001 - 50335 "HINFO IN 3261395760700908320.5592896465674935216. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105687385s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
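	The "Failed to watch" errors come from CoreDNS's kubernetes plugin losing its API watches. Separately, the host record injected during start (the sed pipeline over the coredns ConfigMap in the Last Start log) can be confirmed from the live ConfigMap; a sketch, with the gateway IP varying per cluster network:
	
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # expect an injected block of the form:
	  #   hosts {
	  #      192.168.x.1 host.minikube.internal
	  #      fallthrough
	  #   }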
	
	
	==> describe nodes <==
	Name:               no-preload-185074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-185074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=no-preload-185074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_35_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:35:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-185074
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:37:07 +0000   Tue, 09 Dec 2025 02:35:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-185074
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                bea297a5-f68c-4ca1-862a-f85a9f2be474
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-m6tbs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-185074                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-pflxj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-185074              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-185074     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-8jh88                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-185074              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-wcj5m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-kvvqg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-185074 event: Registered Node no-preload-185074 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-185074 event: Registered Node no-preload-185074 in Controller
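	The Conditions and resource tables above are kubectl describe output. A compact way to re-derive just the node conditions from a live cluster (sketch; standard kubectl jsonpath):
	
	  kubectl get node no-preload-185074 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'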
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
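	# Diagnostic sketch (hedged, not from this run): "martian source ... from 127.0.0.1"
	# on eth0 is commonly seen when kube-proxy sets route_localnet=1 (see its log below)
	# and martian logging is enabled; it is usually benign. The sysctls involved:
	#
	#   minikube -p no-preload-185074 ssh -- \
	#     sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians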
	
	
	==> etcd [9327d2d4d2c27fea6986f3c244048b51916d2021ddd3fdcb8b7969c3248eb12d] <==
	{"level":"warn","ts":"2025-12-09T02:36:36.491307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.497243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.503315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.509466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.515709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.526736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.533062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.540052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.546437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.554446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.561961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.569332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.576166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.584193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.592942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.599331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.616859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.624221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.630045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.636912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:36:36.681492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:36:46.311851Z","caller":"traceutil/trace.go:172","msg":"trace[929441314] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"135.012398ms","start":"2025-12-09T02:36:46.176817Z","end":"2025-12-09T02:36:46.311830Z","steps":["trace[929441314] 'process raft request'  (duration: 134.962387ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:36:46.311903Z","caller":"traceutil/trace.go:172","msg":"trace[148916149] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"136.120279ms","start":"2025-12-09T02:36:46.175768Z","end":"2025-12-09T02:36:46.311888Z","steps":["trace[148916149] 'process raft request'  (duration: 104.156107ms)","trace[148916149] 'compare'  (duration: 31.760851ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:37:09.533929Z","caller":"traceutil/trace.go:172","msg":"trace[710132148] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"143.996949ms","start":"2025-12-09T02:37:09.389908Z","end":"2025-12-09T02:37:09.533905Z","steps":["trace[710132148] 'process raft request'  (duration: 63.645892ms)","trace[710132148] 'compare'  (duration: 80.220621ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:37:10.263538Z","caller":"traceutil/trace.go:172","msg":"trace[72109369] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"108.248508ms","start":"2025-12-09T02:37:10.155274Z","end":"2025-12-09T02:37:10.263522Z","steps":["trace[72109369] 'process raft request'  (duration: 108.143311ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:37:33 up  1:20,  0 user,  load average: 3.26, 2.65, 1.91
	Linux no-preload-185074 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c6f48c9b6416c452f59edf0b90df1147d668c339e70df3ae54c128418ffbbff] <==
	I1209 02:36:38.626162       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:36:38.626422       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1209 02:36:38.626562       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:36:38.626578       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:36:38.626597       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:36:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:36:38.833180       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:36:38.924367       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:36:38.924388       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:36:38.924545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:36:39.124530       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:36:39.124560       1 metrics.go:72] Registering metrics
	I1209 02:36:39.124629       1 controller.go:711] "Syncing nftables rules"
	I1209 02:36:48.832808       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:36:48.832882       1 main.go:301] handling current node
	I1209 02:36:58.832707       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:36:58.832772       1 main.go:301] handling current node
	I1209 02:37:08.832732       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:08.832770       1 main.go:301] handling current node
	I1209 02:37:18.838067       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:18.838106       1 main.go:301] handling current node
	I1209 02:37:28.834721       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1209 02:37:28.834768       1 main.go:301] handling current node
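	# Note (hedged): the "nri plugin exited ... /var/run/nri/nri.sock" line is expected
	# when the runtime has NRI disabled; kindnet continues and its caches sync. The
	# socket's absence can be confirmed with:
	#
	#   minikube -p no-preload-185074 ssh -- ls -l /var/run/nri/nri.sock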
	
	
	==> kube-apiserver [d5c8daf1abc24fba86bc53274918db2a9e734b6cddd581f0b30523f24811caab] <==
	I1209 02:36:37.160746       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:36:37.160752       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:36:37.160338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 02:36:37.161177       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 02:36:37.161366       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.161448       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.161525       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.165176       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:36:37.167796       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:36:37.176392       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 02:36:37.181681       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:37.181702       1 policy_source.go:248] refreshing policies
	I1209 02:36:37.187835       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:36:37.218830       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:36:37.499040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:36:37.525901       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:36:37.544416       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:36:37.551052       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:36:37.559345       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:36:37.590608       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.139.160"}
	I1209 02:36:37.602685       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.57.128"}
	I1209 02:36:38.063467       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:36:40.743760       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:36:40.844471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:36:40.942960       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c5c4ce96abc06f5a5b23aafe5daf5879d64acdb88e8c6ffd8f7cf7c1ada39c1c] <==
	I1209 02:36:40.297676       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:36:40.297686       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:40.297693       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.297759       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1209 02:36:40.297871       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-185074"
	I1209 02:36:40.297984       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1209 02:36:40.298260       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298289       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298290       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.298332       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.299690       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.302764       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.303299       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:40.303736       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.303747       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304359       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304674       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.304833       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.308124       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.308197       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.315059       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.400349       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:40.400385       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:36:40.400392       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:36:40.405411       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [bf55bc97a247e88c685148e5cfafdc9f5a78f00ec8bc92045e9dccdb1872de23] <==
	I1209 02:36:38.433996       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:36:38.512782       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:38.613214       1 shared_informer.go:377] "Caches are synced"
	I1209 02:36:38.613292       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1209 02:36:38.613399       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:36:38.635120       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:36:38.635202       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:36:38.641375       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:36:38.641821       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:36:38.641858       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:38.643570       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:36:38.643579       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:36:38.643585       1 config.go:309] "Starting node config controller"
	I1209 02:36:38.643983       1 config.go:200] "Starting service config controller"
	I1209 02:36:38.644280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:36:38.644483       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:36:38.644672       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:36:38.644722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:36:38.644756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:36:38.744512       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:36:38.745073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:36:38.745083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0350088df68730310fdcf473d3556c4668d047069dccaa944bea1003c044ae64] <==
	I1209 02:36:35.720155       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:36:37.079588       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:36:37.079709       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:36:37.079778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:36:37.079806       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:36:37.122891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:36:37.134036       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:36:37.137895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:36:37.137921       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:36:37.138580       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:36:37.138674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:36:37.238157       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:36:52 no-preload-185074 kubelet[717]: E1209 02:36:52.099111     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: E1209 02:36:54.316630     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: I1209 02:36:54.316716     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:36:54 no-preload-185074 kubelet[717]: E1209 02:36:54.316937     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.025401     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.025440     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.124194     717 scope.go:122] "RemoveContainer" containerID="aa3c7629b0d5509ec5dd3cd5f571b19ca503873b991743721998518767a69de9"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.124429     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: I1209 02:37:02.124467     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:02 no-preload-185074 kubelet[717]: E1209 02:37:02.124668     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: E1209 02:37:04.317022     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: I1209 02:37:04.317059     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:04 no-preload-185074 kubelet[717]: E1209 02:37:04.317215     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:09 no-preload-185074 kubelet[717]: I1209 02:37:09.144421     717 scope.go:122] "RemoveContainer" containerID="494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9"
	Dec 09 02:37:12 no-preload-185074 kubelet[717]: E1209 02:37:12.746073     717 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-m6tbs" containerName="coredns"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.025146     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.025209     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.191373     717 scope.go:122] "RemoveContainer" containerID="8a878551b0235124b4168673daf488cd41289cc76cc570b1aeb76bf3fd965d27"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.191583     717 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" containerName="dashboard-metrics-scraper"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: I1209 02:37:25.191611     717 scope.go:122] "RemoveContainer" containerID="31e834b68531581cb2e391fd999367554113a962e7f22ce981e5e28074942091"
	Dec 09 02:37:25 no-preload-185074 kubelet[717]: E1209 02:37:25.192466     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wcj5m_kubernetes-dashboard(747e9c24-0d2e-428f-8c06-e6c9a1983799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wcj5m" podUID="747e9c24-0d2e-428f-8c06-e6c9a1983799"
	Dec 09 02:37:26 no-preload-185074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:37:26 no-preload-185074 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:37:26 no-preload-185074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:37:26 no-preload-185074 systemd[1]: kubelet.service: Consumed 1.652s CPU time.
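	# Diagnostic sketch (pod name taken from this log): the CrashLoopBackOff on
	# dashboard-metrics-scraper can be investigated via the previous container's logs
	# and the pod's event history:
	#
	#   kubectl --context no-preload-185074 -n kubernetes-dashboard \
	#     logs dashboard-metrics-scraper-867fb5f87b-wcj5m --previous
	#   kubectl --context no-preload-185074 -n kubernetes-dashboard \
	#     describe pod dashboard-metrics-scraper-867fb5f87b-wcj5m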
	
	
	==> kubernetes-dashboard [5777f937bf741ba1dc62499f12167b6495deeedeca30041791c6f42d06337b5b] <==
	2025/12/09 02:36:45 Starting overwatch
	2025/12/09 02:36:45 Using namespace: kubernetes-dashboard
	2025/12/09 02:36:45 Using in-cluster config to connect to apiserver
	2025/12/09 02:36:45 Using secret token for csrf signing
	2025/12/09 02:36:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:36:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:36:45 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/09 02:36:45 Generating JWE encryption key
	2025/12/09 02:36:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:36:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:36:45 Initializing JWE encryption key from synchronized object
	2025/12/09 02:36:45 Creating in-cluster Sidecar client
	2025/12/09 02:36:45 Serving insecurely on HTTP port: 9090
	2025/12/09 02:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:37:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
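	# Note (hedged): the metric client health-check failures line up with the
	# dashboard-metrics-scraper pod crash-looping in the kubelet log above; the
	# service and its endpoints can be checked with:
	#
	#   kubectl --context no-preload-185074 -n kubernetes-dashboard \
	#     get svc,endpoints dashboard-metrics-scraper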
	
	
	==> storage-provisioner [494db3b633a30f19d73a1257ed84c13a29e5bb941ce120fef01b27f9820ee9e9] <==
	I1209 02:36:38.405709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:37:08.409616       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
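	# Diagnostic sketch (hedged): the i/o timeout to 10.96.0.1:443 (the kubernetes
	# Service VIP) points at the service dataplane during the restart window rather
	# than the apiserver itself; cross-check from the host with:
	#
	#   kubectl --context no-preload-185074 get endpoints kubernetes
	#   kubectl --context no-preload-185074 get --raw /version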
	
	
	==> storage-provisioner [b9e869a3748a9413364672aa430aac969773cc6aae42edb96687a03a2a6bfe62] <==
	I1209 02:37:09.335513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:37:09.343693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:37:09.343739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:37:09.385830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:12.841719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:17.102236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:20.700981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:23.754516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.778022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.784416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:26.784589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:26.784814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb!
	I1209 02:37:26.786002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49513343-9b98-4fd9-a16e-c626e02acaeb", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb became leader
	W1209 02:37:26.804151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:26.808782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:26.885163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-185074_afeec6dd-f194-49f1-af33-4a155d4111bb!
	W1209 02:37:28.812292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:28.818835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:30.822030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:30.870473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:32.874136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:32.878001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
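	# Note (hedged): the deprecation warnings come from the provisioner's
	# Endpoints-based leader-election lock; the lock object named in the event above
	# can be inspected with:
	#
	#   kubectl --context no-preload-185074 -n kube-system \
	#     get endpoints k8s.io-minikube-hostpath -o yaml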
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185074 -n no-preload-185074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185074 -n no-preload-185074: exit status 2 (366.780987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-185074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.50s)
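To iterate on just this subtest locally, something like the following works from a minikube source checkout (a sketch; CI passes additional driver and container-runtime flags not shown here):

	go test ./test/integration -run 'TestStartStop/group/no-preload/serial/Pause' -v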

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-485234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-485234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.469186ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:37:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
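The MK_ADDON_ENABLE_PAUSED failure comes from minikube's paused-state check, which shells into the node and lists runc containers; with /run/runc missing, that check fails before the addon is ever applied. A hedged reproduction against this profile:

	minikube -p embed-certs-485234 ssh -- sudo runc list -f json
	minikube -p embed-certs-485234 ssh -- ls -ld /run/runc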
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-485234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-485234 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-485234 describe deploy/metrics-server -n kube-system: exit status 1 (57.844563ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-485234 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
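Had the deployment been created, the image the addon rendered could be verified directly; in this run the same query returns NotFound, matching the kubectl error above (a sketch):

	kubectl --context embed-certs-485234 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'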
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-485234
helpers_test.go:243: (dbg) docker inspect embed-certs-485234:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	        "Created": "2025-12-09T02:37:10.901046477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:37:10.938853383Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hosts",
	        "LogPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a-json.log",
	        "Name": "/embed-certs-485234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-485234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-485234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	                "LowerDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/merged",
	                "UpperDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/diff",
	                "WorkDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-485234",
	                "Source": "/var/lib/docker/volumes/embed-certs-485234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-485234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-485234",
	                "name.minikube.sigs.k8s.io": "embed-certs-485234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fc76c7673366c16bbe4cbf0d60452e09dc572255732aece47455f7cd75d26599",
	            "SandboxKey": "/var/run/docker/netns/fc76c7673366",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-485234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65c970efd44f13df8727d193873c6259ce2c56f73ef1221ef78d5983f99951ba",
	                    "EndpointID": "926c95579e576d7e81cd6381c398e714aa251fbb99986b51f3f9922de7b27217",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "4e:31:5f:7e:70:90",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-485234",
	                        "2220a87a1394"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
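For scripting against this output, the port map alone can be extracted with a Go template, mirroring how minikube itself resolves the SSH port (a sketch):

	docker inspect embed-certs-485234 --format '{{json .NetworkSettings.Ports}}'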
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25: (1.066645928s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p newest-cni-828614 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ start   │ -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ image   │ newest-cni-828614 image list --format=json                                                                                                                                                                                                           │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │ 09 Dec 25 02:36 UTC │
	│ pause   │ -p newest-cni-828614 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:36 UTC │                     │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p newest-cni-828614                                                                                                                                                                                                                                 │ newest-cni-828614            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p disable-driver-mounts-894253                                                                                                                                                                                                                      │ disable-driver-mounts-894253 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ image   │ old-k8s-version-126117 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p old-k8s-version-126117 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ image   │ default-k8s-diff-port-512414 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ pause   │ -p default-k8s-diff-port-512414 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p old-k8s-version-126117                                                                                                                                                                                                                            │ old-k8s-version-126117       │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ image   │ no-preload-185074 image list --format=json                                                                                                                                                                                                           │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p auto-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-933067                  │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ pause   │ -p no-preload-185074 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-512414                                                                                                                                                                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p default-k8s-diff-port-512414                                                                                                                                                                                                                      │ default-k8s-diff-port-512414 │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p kindnet-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-933067               │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ delete  │ -p no-preload-185074                                                                                                                                                                                                                                 │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ delete  │ -p no-preload-185074                                                                                                                                                                                                                                 │ no-preload-185074            │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │ 09 Dec 25 02:37 UTC │
	│ start   │ -p calico-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                               │ calico-933067                │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-485234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-485234           │ jenkins │ v1.37.0 │ 09 Dec 25 02:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:37:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:37:38.308555  325211 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:37:38.308692  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:38.308703  325211 out.go:374] Setting ErrFile to fd 2...
	I1209 02:37:38.308710  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:37:38.309003  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:37:38.309584  325211 out.go:368] Setting JSON to false
	I1209 02:37:38.311042  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4807,"bootTime":1765243051,"procs":420,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:37:38.311100  325211 start.go:143] virtualization: kvm guest
	I1209 02:37:38.313009  325211 out.go:179] * [calico-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:37:38.314120  325211 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:37:38.314128  325211 notify.go:221] Checking for updates...
	I1209 02:37:38.316026  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:37:38.317196  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:37:38.318331  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:37:38.319502  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:37:38.320625  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:37:38.322041  325211 config.go:182] Loaded profile config "auto-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:38.322164  325211 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:38.322291  325211 config.go:182] Loaded profile config "kindnet-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:38.322403  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:37:38.348775  325211 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:37:38.348895  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:38.408688  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:37:38.397871202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:38.408818  325211 docker.go:319] overlay module found
	I1209 02:37:38.410523  325211 out.go:179] * Using the docker driver based on user configuration
	I1209 02:37:38.411762  325211 start.go:309] selected driver: docker
	I1209 02:37:38.411783  325211 start.go:927] validating driver "docker" against <nil>
	I1209 02:37:38.411797  325211 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:37:38.412551  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:37:38.473083  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-09 02:37:38.462618004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:37:38.473346  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:37:38.473541  325211 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:38.476752  325211 out.go:179] * Using Docker driver with root privileges
	I1209 02:37:38.477746  325211 cni.go:84] Creating CNI manager for "calico"
	I1209 02:37:38.477762  325211 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1209 02:37:38.477833  325211 start.go:353] cluster config:
	{Name:calico-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:38.479066  325211 out.go:179] * Starting "calico-933067" primary control-plane node in "calico-933067" cluster
	I1209 02:37:38.480172  325211 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:37:38.481314  325211 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:37:38.482417  325211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:38.482447  325211 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:37:38.482454  325211 cache.go:65] Caching tarball of preloaded images
	I1209 02:37:38.482454  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:37:38.482563  325211 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:37:38.482580  325211 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:37:38.482728  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/config.json ...
	I1209 02:37:38.482755  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/config.json: {Name:mk2ae817915d203b7004126d3f5b417efda86921 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:38.504880  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:37:38.504904  325211 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:37:38.504936  325211 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:37:38.504975  325211 start.go:360] acquireMachinesLock for calico-933067: {Name:mk44b1ed283fc3130a324e52f8ba3bae1d3c8671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:37:38.505082  325211 start.go:364] duration metric: took 85.745µs to acquireMachinesLock for "calico-933067"
	I1209 02:37:38.505113  325211 start.go:93] Provisioning new machine with config: &{Name:calico-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:37:38.505212  325211 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:37:37.036125  319017 cli_runner.go:164] Run: docker network inspect auto-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:37.056009  319017 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:37.060169  319017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
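	The pair of commands above is minikube's idempotent host-entry update: the grep probes for an existing mapping, and the bash one-liner rewrites /etc/hosts by filtering out any stale host.minikube.internal line before appending the fresh one. The same pattern written out as a standalone sketch (name and IP taken from the log; the temp-file path is illustrative):
	
		# Make NAME resolve to IP without duplicating /etc/hosts entries.
		NAME=host.minikube.internal
		IP=192.168.85.1
		{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
		sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$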
	I1209 02:37:37.070771  319017 kubeadm.go:884] updating cluster {Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:37.070888  319017 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:37.070931  319017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:37.103521  319017 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:37.103543  319017 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:37.103584  319017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:37.135414  319017 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:37.135435  319017 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:37.135445  319017 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:37.135553  319017 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-933067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
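	The empty ExecStart= is the standard systemd idiom for replacing, rather than appending to, an inherited command. minikube ships this fragment as a drop-in; the scp lines below place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Done by hand, with the ExecStart flags abridged from the unit above, the equivalent would be:
	
		# Install a kubelet drop-in that overrides ExecStart, then reload systemd.
		sudo mkdir -p /etc/systemd/system/kubelet.service.d
		sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
		EOF
		sudo systemctl daemon-reload && sudo systemctl start kubelet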
	I1209 02:37:37.135613  319017 ssh_runner.go:195] Run: crio config
	I1209 02:37:37.190362  319017 cni.go:84] Creating CNI manager for ""
	I1209 02:37:37.190387  319017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:37.190410  319017 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:37.190440  319017 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-933067 NodeName:auto-933067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:37.190622  319017 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-933067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
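	The generated file is four YAML documents in one - InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration - separated by --- and dispatched on their kind fields. A quick way to confirm the structure of the staged copy (path taken from the scp line below):
	
		# List the document kinds in the generated kubeadm config.
		grep -n '^kind:' /var/tmp/minikube/kubeadm.yaml.new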
	
	I1209 02:37:37.190698  319017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:37.199870  319017 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:37.199934  319017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:37.207812  319017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1209 02:37:37.221390  319017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:37.242038  319017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1209 02:37:37.257345  319017 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:37.261505  319017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:37.273372  319017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:37.373762  319017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:37.399324  319017 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067 for IP: 192.168.85.2
	I1209 02:37:37.399352  319017 certs.go:195] generating shared ca certs ...
	I1209 02:37:37.399371  319017 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.399521  319017 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:37.399579  319017 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:37.399595  319017 certs.go:257] generating profile certs ...
	I1209 02:37:37.399677  319017 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.key
	I1209 02:37:37.399697  319017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.crt with IP's: []
	I1209 02:37:37.520808  319017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.crt ...
	I1209 02:37:37.520835  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.crt: {Name:mk4e265eed97edfebddfd8a9b66307f931513008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.521190  319017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.key ...
	I1209 02:37:37.521213  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/client.key: {Name:mkdc9a61c1c2cdd71dadfb56b723a1444a87a247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.521335  319017 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key.db4179d6
	I1209 02:37:37.521357  319017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt.db4179d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1209 02:37:37.606712  319017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt.db4179d6 ...
	I1209 02:37:37.606752  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt.db4179d6: {Name:mk81dc7f130bf8b8b78f00f7a802ae21211af47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.606960  319017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key.db4179d6 ...
	I1209 02:37:37.607014  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key.db4179d6: {Name:mkf3a7d9e54e3a845dda6e9d23a42acae1a83892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.607146  319017 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt.db4179d6 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt
	I1209 02:37:37.607275  319017 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key.db4179d6 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key
	I1209 02:37:37.607364  319017 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.key
	I1209 02:37:37.607384  319017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.crt with IP's: []
	I1209 02:37:37.819002  319017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.crt ...
	I1209 02:37:37.819031  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.crt: {Name:mk91da42627c9327ea0ac8a5ee562ea485bf4635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:37.819188  319017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.key ...
	I1209 02:37:37.819207  319017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.key: {Name:mk73960a11ae274221bcbe0df43971d2d54b2e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
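	minikube mints these profile certificates in-process (crypto.go) against the shared minikubeCA; the apiserver certificate is the notable one, since it must carry IP SANs for the service VIP (10.96.0.1), loopback, and the node IP. An equivalent openssl sketch with illustrative file names, not minikube's actual mechanism:
	
		# CSR plus CA-signed cert carrying the same IP SANs as the apiserver cert above.
		openssl req -new -newkey rsa:2048 -nodes -subj '/CN=minikube' \
		  -keyout apiserver.key -out apiserver.csr
		openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
		  -days 365 -out apiserver.crt \
		  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')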
	I1209 02:37:37.819455  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:37.819503  319017 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:37.819515  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:37.819546  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:37.819577  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:37.819606  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:37.819674  319017 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:37.820410  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:37.841995  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:37.864770  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:37.900160  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:37.920197  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1209 02:37:37.940239  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:37.958284  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:37.978802  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/auto-933067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:37.998339  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:38.018535  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:38.037281  319017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:38.056055  319017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:38.069479  319017 ssh_runner.go:195] Run: openssl version
	I1209 02:37:38.076359  319017 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:38.084842  319017 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:38.092670  319017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:38.096883  319017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:38.096927  319017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:38.135058  319017 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:38.143541  319017 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:38.152350  319017 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:38.160623  319017 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:38.168141  319017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:38.172543  319017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:38.172593  319017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:38.218831  319017 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:38.227145  319017 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
	I1209 02:37:38.235101  319017 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:38.243176  319017 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:38.250890  319017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:38.255105  319017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:38.255156  319017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:38.291940  319017 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:38.300768  319017 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
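	This openssl/ln sequence populates OpenSSL's hashed trust directory: openssl x509 -hash prints the subject-name hash, and verification looks a CA up as <hash>.0 under /etc/ssl/certs. The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) are instances of:
	
		# Link a CA cert into the OpenSSL hash directory so chain verification finds it.
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")
		sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"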
	I1209 02:37:38.308443  319017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:38.312441  319017 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
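	The non-zero stat exit status, not its output, is what tells minikube this is a first start rather than a restart of an existing control plane. The probe pattern is simply:
	
		# Exit status carries the answer: 0 = cert exists, non-zero = likely first start.
		if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
		  echo "no apiserver-kubelet-client cert; treating as first start"
		fi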
	I1209 02:37:38.312495  319017 kubeadm.go:401] StartCluster: {Name:auto-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:38.312563  319017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:38.312596  319017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:38.342356  319017 cri.go:89] found id: ""
	I1209 02:37:38.342426  319017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:38.351856  319017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:38.359527  319017 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:38.359577  319017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:38.368527  319017 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:38.368543  319017 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:38.368597  319017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:38.377033  319017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:38.377078  319017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:38.385743  319017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:38.394625  319017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:38.394687  319017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:38.402845  319017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:38.410941  319017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:38.410997  319017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:38.418651  319017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:38.427115  319017 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:38.427166  319017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 02:37:38.437379  319017 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
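	Reflowed for readability, the invocation above amounts to the following; the Dir/File/Port/Swap/NumCPU/Mem/SystemVerification preflight checks are expected to fail inside a docker-driver container, so they are suppressed:
	
		# Same kubeadm init call as in the log, with the ignore list pulled into a variable.
		IGNORE=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
		sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
		  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors="$IGNORE"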
	I1209 02:37:38.484425  319017 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:38.484519  319017 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:38.507276  319017 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:38.507360  319017 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:38.507410  319017 kubeadm.go:319] OS: Linux
	I1209 02:37:38.507470  319017 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:38.507535  319017 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:38.507595  319017 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:38.507678  319017 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:38.507741  319017 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:38.507796  319017 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:38.507867  319017 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:38.507939  319017 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:38.575769  319017 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:38.575972  319017 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:38.576124  319017 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:38.585939  319017 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1209 02:37:38.513558  312861 node_ready.go:57] node "embed-certs-485234" has "Ready":"False" status (will retry)
	W1209 02:37:40.513616  312861 node_ready.go:57] node "embed-certs-485234" has "Ready":"False" status (will retry)
	I1209 02:37:38.588367  319017 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:38.588480  319017 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:38.588581  319017 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:38.926004  319017 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:39.093051  319017 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:39.415586  319017 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:39.585143  319017 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:39.758122  319017 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:39.758358  319017 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-933067 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 02:37:40.516824  319017 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:40.517094  319017 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-933067 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 02:37:41.046182  319017 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:37.271507  321981 cli_runner.go:164] Run: docker exec kindnet-933067 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:37.322508  321981 oci.go:144] the created container "kindnet-933067" has a running status.
	I1209 02:37:37.322537  321981 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa...
	I1209 02:37:37.427491  321981 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:37.821247  321981 cli_runner.go:164] Run: docker container inspect kindnet-933067 --format={{.State.Status}}
	I1209 02:37:37.842948  321981 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:37.842970  321981 kic_runner.go:114] Args: [docker exec --privileged kindnet-933067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:37:37.890933  321981 cli_runner.go:164] Run: docker container inspect kindnet-933067 --format={{.State.Status}}
	I1209 02:37:37.912184  321981 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:37.912270  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:37.931530  321981 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:37.931902  321981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1209 02:37:37.931934  321981 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:38.067337  321981 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-933067
	
	I1209 02:37:38.067363  321981 ubuntu.go:182] provisioning hostname "kindnet-933067"
	I1209 02:37:38.067439  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:38.089213  321981 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:38.089536  321981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1209 02:37:38.089573  321981 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-933067 && echo "kindnet-933067" | sudo tee /etc/hostname
	I1209 02:37:38.233236  321981 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-933067
	
	I1209 02:37:38.233307  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:38.252295  321981 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:38.252506  321981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1209 02:37:38.252531  321981 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-933067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-933067/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-933067' | sudo tee -a /etc/hosts; 
				fi
			fi
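	Mapping the hostname to 127.0.1.1 rather than onto the 127.0.0.1 line follows the Debian convention for a machine's own name (the kic base image is Debian 12, per the os-release probe below), and the grep guards make the edit idempotent. The result can be spot-checked with:
	
		# Confirm the container resolves its own hostname locally.
		getent hosts kindnet-933067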
	I1209 02:37:38.385403  321981 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:37:38.385460  321981 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:38.385487  321981 ubuntu.go:190] setting up certificates
	I1209 02:37:38.385502  321981 provision.go:84] configureAuth start
	I1209 02:37:38.385620  321981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-933067
	I1209 02:37:38.407040  321981 provision.go:143] copyHostCerts
	I1209 02:37:38.407091  321981 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:38.407104  321981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:38.407174  321981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:38.407305  321981 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:38.407319  321981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:38.407361  321981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:38.407456  321981 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:38.407468  321981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:38.407510  321981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:38.407595  321981 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.kindnet-933067 san=[127.0.0.1 192.168.76.2 kindnet-933067 localhost minikube]
	I1209 02:37:38.433037  321981 provision.go:177] copyRemoteCerts
	I1209 02:37:38.433106  321981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:38.433164  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:38.456313  321981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa Username:docker}
	I1209 02:37:38.560832  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:38.583473  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1209 02:37:38.603273  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:37:38.621414  321981 provision.go:87] duration metric: took 235.893524ms to configureAuth
	I1209 02:37:38.621441  321981 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:38.621657  321981 config.go:182] Loaded profile config "kindnet-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:38.621779  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:38.640303  321981 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:38.640578  321981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1209 02:37:38.640602  321981 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:38.926074  321981 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:38.926099  321981 machine.go:97] duration metric: took 1.013894106s to provisionDockerMachine
	I1209 02:37:38.926112  321981 client.go:176] duration metric: took 6.388551828s to LocalClient.Create
	I1209 02:37:38.926124  321981 start.go:167] duration metric: took 6.388616549s to libmachine.API.Create "kindnet-933067"
	I1209 02:37:38.926165  321981 start.go:293] postStartSetup for "kindnet-933067" (driver="docker")
	I1209 02:37:38.926198  321981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:38.926287  321981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:38.926337  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:38.947146  321981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa Username:docker}
	I1209 02:37:39.065788  321981 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:39.069725  321981 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:39.069767  321981 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
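	[editor's note] The VERSION_CODENAME warning above comes from mapping /etc/os-release onto a struct that has no field for that key; unknown keys are simply skipped. A small stand-alone sketch of os-release parsing (map-based, so no keys are dropped):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads key=value pairs from an os-release file into a map.
// A struct-based decoder (as in the log) would warn on keys it cannot place,
// e.g. VERSION_CODENAME; a map keeps everything.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		kv[k] = strings.Trim(v, `"`)
	}
	return kv, s.Err()
}

func main() {
	kv, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Println(kv["PRETTY_NAME"]) // e.g. "Debian GNU/Linux 12 (bookworm)"
}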
	I1209 02:37:39.069777  321981 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:39.069821  321981 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:39.069889  321981 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:39.069972  321981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:39.078190  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:39.099616  321981 start.go:296] duration metric: took 173.422342ms for postStartSetup
	I1209 02:37:39.100019  321981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-933067
	I1209 02:37:39.120360  321981 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/config.json ...
	I1209 02:37:39.120619  321981 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:39.120688  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:39.140325  321981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa Username:docker}
	I1209 02:37:39.238088  321981 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:39.244076  321981 start.go:128] duration metric: took 6.708313723s to createHost
	I1209 02:37:39.244104  321981 start.go:83] releasing machines lock for "kindnet-933067", held for 6.708518017s
	I1209 02:37:39.244174  321981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-933067
	I1209 02:37:39.264575  321981 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:39.264598  321981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:39.264652  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:39.264701  321981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-933067
	I1209 02:37:39.286160  321981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa Username:docker}
	I1209 02:37:39.286758  321981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/kindnet-933067/id_rsa Username:docker}
	I1209 02:37:39.434695  321981 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:39.441493  321981 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:39.481032  321981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:39.485685  321981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:39.485748  321981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:39.511975  321981 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
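	[editor's note] The find/mv step above sidelines bridge and podman CNI configs by appending a .mk_disabled suffix, so only the chosen CNI (kindnet here) stays active. A rough Go equivalent (disableBridgeCNIs is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames any bridge/podman CNI config in dir with a
// ".mk_disabled" suffix, the same effect as the find/mv pipeline in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already sidelined, or not a plain file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}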
	I1209 02:37:39.512002  321981 start.go:496] detecting cgroup driver to use...
	I1209 02:37:39.512035  321981 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:37:39.512080  321981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:39.530962  321981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:39.544410  321981 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:39.544463  321981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:39.561971  321981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:39.580427  321981 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:39.679998  321981 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:39.801454  321981 docker.go:234] disabling docker service ...
	I1209 02:37:39.801523  321981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:39.822368  321981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:39.837368  321981 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:39.931009  321981 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:40.033029  321981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:40.046029  321981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:40.059814  321981 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:40.059866  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.072370  321981 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:40.072434  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.080853  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.089424  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.097725  321981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:40.105324  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.113796  321981 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.126800  321981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:40.135230  321981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:40.142856  321981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
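	[editor's note] The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, sysctls). A sketch of the same key-rewrite idea in Go, assuming simple `key = value` lines:

package main

import (
	"fmt"
	"regexp"
)

// setCrioKey replaces a whole `key = value` line in a crio drop-in,
// mirroring the sed 's|^.*key = .*$|key = "value"|' pattern in the log.
func setCrioKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	conf = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setCrioKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}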
	I1209 02:37:40.150007  321981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:40.256902  321981 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:37:42.315722  321981 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.058781907s)
	I1209 02:37:42.315758  321981 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:42.315806  321981 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
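	[editor's note] "Will wait 60s for socket path" is a poll-until-present loop on /var/run/crio/crio.sock. A minimal sketch of that wait, with an assumed 500ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout expires,
// like the 60s socket wait in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is there
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}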
	I1209 02:37:42.320053  321981 start.go:564] Will wait 60s for crictl version
	I1209 02:37:42.320102  321981 ssh_runner.go:195] Run: which crictl
	I1209 02:37:42.324216  321981 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:42.351304  321981 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:42.351379  321981 ssh_runner.go:195] Run: crio --version
	I1209 02:37:42.382447  321981 ssh_runner.go:195] Run: crio --version
	I1209 02:37:42.422835  321981 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:38.507180  325211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:37:38.507472  325211 start.go:159] libmachine.API.Create for "calico-933067" (driver="docker")
	I1209 02:37:38.507515  325211 client.go:173] LocalClient.Create starting
	I1209 02:37:38.507578  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:37:38.507646  325211 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:38.507669  325211 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:38.507734  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:37:38.507758  325211 main.go:143] libmachine: Decoding PEM data...
	I1209 02:37:38.507773  325211 main.go:143] libmachine: Parsing certificate...
	I1209 02:37:38.508288  325211 cli_runner.go:164] Run: docker network inspect calico-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:37:38.530420  325211 cli_runner.go:211] docker network inspect calico-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:37:38.530498  325211 network_create.go:284] running [docker network inspect calico-933067] to gather additional debugging logs...
	I1209 02:37:38.530520  325211 cli_runner.go:164] Run: docker network inspect calico-933067
	W1209 02:37:38.547322  325211 cli_runner.go:211] docker network inspect calico-933067 returned with exit code 1
	I1209 02:37:38.547350  325211 network_create.go:287] error running [docker network inspect calico-933067]: docker network inspect calico-933067: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-933067 not found
	I1209 02:37:38.547375  325211 network_create.go:289] output of [docker network inspect calico-933067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-933067 not found
	
	** /stderr **
	I1209 02:37:38.547503  325211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:38.569221  325211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:37:38.570208  325211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:37:38.570926  325211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:37:38.571577  325211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8c401fbcec20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:25:8f:63:1a:88} reservation:<nil>}
	I1209 02:37:38.572155  325211 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-d98c13d96c5c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:97:5b:a8:62:15} reservation:<nil>}
	I1209 02:37:38.572581  325211 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65c970efd44f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7e:8e:00:ff:ef:6f} reservation:<nil>}
	I1209 02:37:38.573449  325211 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f82a80}
	I1209 02:37:38.573472  325211 network_create.go:124] attempt to create docker network calico-933067 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1209 02:37:38.573511  325211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-933067 calico-933067
	I1209 02:37:38.627894  325211 network_create.go:108] docker network calico-933067 192.168.103.0/24 created
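	[editor's note] The subnet scan above walks candidate 192.168.x.0/24 networks and takes the first one no existing bridge occupies; the skipped values (49, 58, 67, 76, 85, 94, then 103) suggest the third octet advances in steps of 9. A sketch under that assumption:

package main

import "fmt"

// firstFreeSubnet steps through candidate 192.168.x.0/24 subnets and returns
// the first not already taken. The step of 9 matches the sequence in the
// log but is an assumption of this sketch.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return "" // nothing free in range
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24, as in the log
}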
	I1209 02:37:38.627922  325211 kic.go:121] calculated static IP "192.168.103.2" for the "calico-933067" container
	I1209 02:37:38.627976  325211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:37:38.646556  325211 cli_runner.go:164] Run: docker volume create calico-933067 --label name.minikube.sigs.k8s.io=calico-933067 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:37:38.664484  325211 oci.go:103] Successfully created a docker volume calico-933067
	I1209 02:37:38.664555  325211 cli_runner.go:164] Run: docker run --rm --name calico-933067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-933067 --entrypoint /usr/bin/test -v calico-933067:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:37:39.058000  325211 oci.go:107] Successfully prepared a docker volume calico-933067
	I1209 02:37:39.058074  325211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:39.058086  325211 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:37:39.058148  325211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 02:37:42.205495  325211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.14730182s)
	I1209 02:37:42.205537  325211 kic.go:203] duration metric: took 3.147446687s to extract preloaded images to volume ...
	W1209 02:37:42.205627  325211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:37:42.205679  325211 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:37:42.205727  325211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:37:42.268249  325211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-933067 --name calico-933067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-933067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-933067 --network calico-933067 --ip 192.168.103.2 --volume calico-933067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 02:37:42.569831  325211 cli_runner.go:164] Run: docker container inspect calico-933067 --format={{.State.Running}}
	I1209 02:37:42.591216  325211 cli_runner.go:164] Run: docker container inspect calico-933067 --format={{.State.Status}}
	I1209 02:37:42.611412  325211 cli_runner.go:164] Run: docker exec calico-933067 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:37:42.658856  325211 oci.go:144] the created container "calico-933067" has a running status.
	I1209 02:37:42.658883  325211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa...
	I1209 02:37:42.754654  325211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:37:42.785154  325211 cli_runner.go:164] Run: docker container inspect calico-933067 --format={{.State.Status}}
	I1209 02:37:42.810359  325211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:37:42.810384  325211 kic_runner.go:114] Args: [docker exec --privileged calico-933067 chown docker:docker /home/docker/.ssh/authorized_keys]
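	[editor's note] Creating the SSH key for kic means generating a keypair and installing the public half as /home/docker/.ssh/authorized_keys inside the container. A sketch using the golang.org/x/crypto/ssh package for the authorized_keys encoding (the key size is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// newAuthorizedKey generates an RSA keypair and returns the authorized_keys
// line for the public half, i.e. the material the kic provisioner pushes to
// /home/docker/.ssh/authorized_keys in the log.
func newAuthorizedKey() (string, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	if err != nil {
		return "", err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return "", err
	}
	return string(ssh.MarshalAuthorizedKey(pub)), nil
}

func main() {
	line, err := newAuthorizedKey()
	if err != nil {
		panic(err)
	}
	fmt.Print(line) // "ssh-rsa AAAA..."
}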
	I1209 02:37:42.884775  325211 cli_runner.go:164] Run: docker container inspect calico-933067 --format={{.State.Status}}
	I1209 02:37:42.912480  325211 machine.go:94] provisionDockerMachine start ...
	I1209 02:37:42.912710  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:42.938335  325211 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:42.938708  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1209 02:37:42.938730  325211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:37:42.939715  325211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 02:37:41.702808  319017 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:42.493263  319017 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:42.493471  319017 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:42.602288  319017 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:42.848108  319017 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:43.335913  319017 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:43.494872  319017 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:43.761296  319017 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:43.761823  319017 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:43.765574  319017 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:37:42.424007  321981 cli_runner.go:164] Run: docker network inspect kindnet-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:42.447092  321981 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:42.452020  321981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:42.463286  321981 kubeadm.go:884] updating cluster {Name:kindnet-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:42.463540  321981 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:42.463584  321981 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:42.499105  321981 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:42.499125  321981 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:42.499173  321981 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:42.528508  321981 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:42.528526  321981 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:42.528535  321981 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:42.528649  321981 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-933067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1209 02:37:42.528730  321981 ssh_runner.go:195] Run: crio config
	I1209 02:37:42.579626  321981 cni.go:84] Creating CNI manager for "kindnet"
	I1209 02:37:42.579692  321981 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:42.579720  321981 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-933067 NodeName:kindnet-933067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:42.579891  321981 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-933067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
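
	[editor's note] The kubeadm config above is rendered from the cluster profile before being copied to /var/tmp/minikube/kubeadm.yaml. A toy text/template rendering of just the InitConfiguration head, to show the parameterization (the template text is illustrative, not minikube's own):

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a cut-down, illustrative template; only the fields shown
// in the log's InitConfiguration are parameterized.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the kindnet-933067 run above.
	_ = t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.76.2", 8443, "kindnet-933067"})
}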
	
	I1209 02:37:42.579956  321981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:42.589687  321981 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 02:37:42.589749  321981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:42.598352  321981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1209 02:37:42.612455  321981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:42.630110  321981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1209 02:37:42.645188  321981 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:42.648814  321981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:42.660054  321981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:42.775622  321981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:42.794425  321981 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067 for IP: 192.168.76.2
	I1209 02:37:42.794445  321981 certs.go:195] generating shared ca certs ...
	I1209 02:37:42.794464  321981 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:42.794656  321981 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:42.794739  321981 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:42.794755  321981 certs.go:257] generating profile certs ...
	I1209 02:37:42.794834  321981 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.key
	I1209 02:37:42.794857  321981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.crt with IP's: []
	I1209 02:37:43.001456  321981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.crt ...
	I1209 02:37:43.001486  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.crt: {Name:mk83b2c6bf85a41265f27798758e80adc53b6776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.001630  321981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.key ...
	I1209 02:37:43.001659  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/client.key: {Name:mk68b389e427831aafc8a0307475fa851642287d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.001761  321981 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key.84c773a7
	I1209 02:37:43.001786  321981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt.84c773a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 02:37:43.065871  321981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt.84c773a7 ...
	I1209 02:37:43.065896  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt.84c773a7: {Name:mk33a344aef4dbcc0b2f53b06f1e76f9a52a09b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.066033  321981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key.84c773a7 ...
	I1209 02:37:43.066045  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key.84c773a7: {Name:mk302ae81ca3886d8e03a46813f0d77faffcbac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.066115  321981 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt.84c773a7 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt
	I1209 02:37:43.066183  321981 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key.84c773a7 -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key
	I1209 02:37:43.066236  321981 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.key
	I1209 02:37:43.066250  321981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.crt with IP's: []
	I1209 02:37:43.175962  321981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.crt ...
	I1209 02:37:43.175989  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.crt: {Name:mkcd0c67078112caaf1fe8d572e0bc3a4553363d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.176141  321981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.key ...
	I1209 02:37:43.176157  321981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.key: {Name:mk958ad816f48d07c287ce8a125d27cc8e57faf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:43.176336  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:43.176375  321981 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:43.176386  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:43.176410  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:43.176433  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:43.176457  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:43.176521  321981 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:43.177095  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:43.195689  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:43.213452  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:43.230821  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:43.247522  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 02:37:43.263869  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:43.280883  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:43.298921  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kindnet-933067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:43.316895  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:43.334444  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:43.350609  321981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:43.366889  321981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:43.378648  321981 ssh_runner.go:195] Run: openssl version
	I1209 02:37:43.384407  321981 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:43.391102  321981 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:43.398075  321981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:43.401439  321981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:43.401478  321981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:43.437326  321981 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:43.444955  321981 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:43.452038  321981 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:43.459146  321981 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:43.467552  321981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:43.471489  321981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:43.471537  321981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:43.504960  321981 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:43.512268  321981 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:43.519323  321981 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:43.526158  321981 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:43.534980  321981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:43.538574  321981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:43.538621  321981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:43.577608  321981 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:43.585603  321981 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
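	[editor's note] The openssl/ln pairs above implement the classic CA-directory layout: each PEM gets a symlink /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A Go sketch of one link (requires the openssl CLI on PATH; linkCertByHash is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM file and
// symlinks /etc/ssl/certs/<hash>.0 to it, the pattern visible in the log.
func linkCertByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -f
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}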
	I1209 02:37:43.592989  321981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:43.596351  321981 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:43.596411  321981 kubeadm.go:401] StartCluster: {Name:kindnet-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:43.596497  321981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:43.596534  321981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:43.626198  321981 cri.go:89] found id: ""
	I1209 02:37:43.626253  321981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:43.633537  321981 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:43.640849  321981 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:43.640892  321981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:43.648114  321981 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:43.648132  321981 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:43.648170  321981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:43.655198  321981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:43.655249  321981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:43.662628  321981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:43.669757  321981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:43.669810  321981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:43.677366  321981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:43.684944  321981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:43.684991  321981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:43.692048  321981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:43.699945  321981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:43.699996  321981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
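	[editor's note] The grep/rm loop above is stale-config cleanup: any kubeconfig that does not reference https://control-plane.minikube.internal:8443 is removed so that `kubeadm init` regenerates it. A compact sketch of the same check:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs drops any kubeconfig that is missing or does not
// point at the expected control-plane endpoint, mirroring the grep/rm loop
// in the log.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p) // missing or stale: remove so kubeadm regenerates it
			fmt.Println("removed:", p)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443",
		[]string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})
}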
	I1209 02:37:43.707458  321981 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:43.744691  321981 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:43.744763  321981 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:43.763312  321981 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:43.763426  321981 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:43.763490  321981 kubeadm.go:319] OS: Linux
	I1209 02:37:43.763535  321981 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:43.763616  321981 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:43.763730  321981 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:43.763807  321981 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:43.763884  321981 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:43.763954  321981 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:43.764061  321981 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:43.764132  321981 kubeadm.go:319] CGROUPS_IO: enabled
	I1209 02:37:43.824748  321981 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:43.824904  321981 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:43.825028  321981 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:43.834139  321981 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:42.514223  312861 node_ready.go:49] node "embed-certs-485234" is "Ready"
	I1209 02:37:42.514246  312861 node_ready.go:38] duration metric: took 11.004147189s for node "embed-certs-485234" to be "Ready" ...
	I1209 02:37:42.514258  312861 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:37:42.514365  312861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:37:42.528492  312861 api_server.go:72] duration metric: took 11.343868983s to wait for apiserver process to appear ...
	I1209 02:37:42.528516  312861 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:37:42.528536  312861 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1209 02:37:42.532960  312861 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1209 02:37:42.534144  312861 api_server.go:141] control plane version: v1.34.2
	I1209 02:37:42.534171  312861 api_server.go:131] duration metric: took 5.647872ms to wait for apiserver health ...
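	[editor's note] The healthz probe above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A sketch of the probe; it skips TLS verification for brevity, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs an apiserver healthz probe like the one in the log.
// InsecureSkipVerify is a shortcut for this sketch only; the real client
// should pin the cluster CA.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // body is typically "ok"
}

func main() {
	fmt.Println(checkHealthz("https://192.168.94.2:8443/healthz"))
}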
	I1209 02:37:42.534182  312861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:37:42.537569  312861 system_pods.go:59] 8 kube-system pods found
	I1209 02:37:42.537600  312861 system_pods.go:61] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:37:42.537606  312861 system_pods.go:61] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:42.537613  312861 system_pods.go:61] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:42.537617  312861 system_pods.go:61] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:42.537622  312861 system_pods.go:61] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:42.537626  312861 system_pods.go:61] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:42.537629  312861 system_pods.go:61] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:42.537663  312861 system_pods.go:61] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:37:42.537675  312861 system_pods.go:74] duration metric: took 3.485387ms to wait for pod list to return data ...
	I1209 02:37:42.537683  312861 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:37:42.540438  312861 default_sa.go:45] found service account: "default"
	I1209 02:37:42.540463  312861 default_sa.go:55] duration metric: took 2.747078ms for default service account to be created ...
	I1209 02:37:42.540473  312861 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 02:37:42.544905  312861 system_pods.go:86] 8 kube-system pods found
	I1209 02:37:42.544937  312861 system_pods.go:89] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:37:42.544946  312861 system_pods.go:89] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:42.544955  312861 system_pods.go:89] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:42.544960  312861 system_pods.go:89] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:42.544966  312861 system_pods.go:89] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:42.544975  312861 system_pods.go:89] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:42.544980  312861 system_pods.go:89] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:42.544988  312861 system_pods.go:89] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:37:42.545011  312861 retry.go:31] will retry after 208.849544ms: missing components: kube-dns
	I1209 02:37:42.759144  312861 system_pods.go:86] 8 kube-system pods found
	I1209 02:37:42.759180  312861 system_pods.go:89] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:37:42.759188  312861 system_pods.go:89] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:42.759196  312861 system_pods.go:89] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:42.759249  312861 system_pods.go:89] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:42.759258  312861 system_pods.go:89] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:42.759263  312861 system_pods.go:89] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:42.759268  312861 system_pods.go:89] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:42.759275  312861 system_pods.go:89] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:37:42.759322  312861 retry.go:31] will retry after 250.368537ms: missing components: kube-dns
	I1209 02:37:43.015620  312861 system_pods.go:86] 8 kube-system pods found
	I1209 02:37:43.015686  312861 system_pods.go:89] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:37:43.015696  312861 system_pods.go:89] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:43.015710  312861 system_pods.go:89] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:43.015716  312861 system_pods.go:89] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:43.015723  312861 system_pods.go:89] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:43.015728  312861 system_pods.go:89] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:43.015733  312861 system_pods.go:89] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:43.015755  312861 system_pods.go:89] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:37:43.015778  312861 retry.go:31] will retry after 415.052548ms: missing components: kube-dns
	I1209 02:37:43.434600  312861 system_pods.go:86] 8 kube-system pods found
	I1209 02:37:43.434629  312861 system_pods.go:89] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:37:43.434661  312861 system_pods.go:89] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:43.434668  312861 system_pods.go:89] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:43.434674  312861 system_pods.go:89] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:43.434680  312861 system_pods.go:89] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:43.434685  312861 system_pods.go:89] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:43.434690  312861 system_pods.go:89] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:43.434698  312861 system_pods.go:89] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:37:43.434716  312861 retry.go:31] will retry after 489.48227ms: missing components: kube-dns
	I1209 02:37:43.927992  312861 system_pods.go:86] 8 kube-system pods found
	I1209 02:37:43.928023  312861 system_pods.go:89] "coredns-66bc5c9577-sk4dm" [8bc9e893-f0f2-4783-8ded-7fd6e4cd1785] Running
	I1209 02:37:43.928029  312861 system_pods.go:89] "etcd-embed-certs-485234" [e0242c54-b2ac-43c2-9f01-3b7f9ea9e92e] Running
	I1209 02:37:43.928035  312861 system_pods.go:89] "kindnet-m72mz" [f5bc8f03-4058-446e-9c8b-af2472536ab6] Running
	I1209 02:37:43.928039  312861 system_pods.go:89] "kube-apiserver-embed-certs-485234" [87d3e463-d44f-46c2-ae7a-2d64bbe25219] Running
	I1209 02:37:43.928042  312861 system_pods.go:89] "kube-controller-manager-embed-certs-485234" [9076d8b6-0cad-40c2-b1ba-cde515336b9d] Running
	I1209 02:37:43.928050  312861 system_pods.go:89] "kube-proxy-ldzjl" [5960df0e-74d0-4df0-a55b-e02828d2b755] Running
	I1209 02:37:43.928053  312861 system_pods.go:89] "kube-scheduler-embed-certs-485234" [a9a4f9f4-4855-4493-bfc7-28fd78d8895b] Running
	I1209 02:37:43.928056  312861 system_pods.go:89] "storage-provisioner" [1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2] Running
	I1209 02:37:43.928065  312861 system_pods.go:126] duration metric: took 1.387585085s to wait for k8s-apps to be running ...
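	[Editor's note] The system_pods.go/retry.go lines above show minikube polling the kube-system pod list and retrying with a growing delay until no required component (here kube-dns) is missing. A minimal client-go sketch of that pattern follows; waitForKubeDNS and the backoff growth rule are illustrative assumptions that only approximate the delays in the log, not minikube's actual code.

```go
// A sketch, not minikube's code: poll kube-system pods with a growing
// delay until a kube-dns pod reports Running, as the retry.go lines do.
package waitutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForKubeDNS(ctx context.Context, cs kubernetes.Interface) error {
	delay := 200 * time.Millisecond // starting point, roughly as in the log
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kube-dns",
		})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil // kube-dns is up; stop retrying
				}
			}
		}
		fmt.Printf("will retry after %v: missing components: kube-dns\n", delay)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		delay += delay / 2 // grow the delay between attempts
	}
}
```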
	I1209 02:37:43.928075  312861 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 02:37:43.928120  312861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:37:43.941401  312861 system_svc.go:56] duration metric: took 13.317638ms WaitForService to wait for kubelet
	I1209 02:37:43.941428  312861 kubeadm.go:587] duration metric: took 12.756808749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:37:43.941454  312861 node_conditions.go:102] verifying NodePressure condition ...
	I1209 02:37:43.943989  312861 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 02:37:43.944013  312861 node_conditions.go:123] node cpu capacity is 8
	I1209 02:37:43.944033  312861 node_conditions.go:105] duration metric: took 2.572247ms to run NodePressure ...
	I1209 02:37:43.944047  312861 start.go:242] waiting for startup goroutines ...
	I1209 02:37:43.944061  312861 start.go:247] waiting for cluster config update ...
	I1209 02:37:43.944079  312861 start.go:256] writing updated cluster config ...
	I1209 02:37:43.944336  312861 ssh_runner.go:195] Run: rm -f paused
	I1209 02:37:43.947893  312861 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:43.951247  312861 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sk4dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.955117  312861 pod_ready.go:94] pod "coredns-66bc5c9577-sk4dm" is "Ready"
	I1209 02:37:43.955140  312861 pod_ready.go:86] duration metric: took 3.869286ms for pod "coredns-66bc5c9577-sk4dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.956978  312861 pod_ready.go:83] waiting for pod "etcd-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.961758  312861 pod_ready.go:94] pod "etcd-embed-certs-485234" is "Ready"
	I1209 02:37:43.961776  312861 pod_ready.go:86] duration metric: took 4.779073ms for pod "etcd-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.963601  312861 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.966975  312861 pod_ready.go:94] pod "kube-apiserver-embed-certs-485234" is "Ready"
	I1209 02:37:43.966990  312861 pod_ready.go:86] duration metric: took 3.371502ms for pod "kube-apiserver-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:43.968519  312861 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:44.352521  312861 pod_ready.go:94] pod "kube-controller-manager-embed-certs-485234" is "Ready"
	I1209 02:37:44.352544  312861 pod_ready.go:86] duration metric: took 384.008179ms for pod "kube-controller-manager-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:44.552439  312861 pod_ready.go:83] waiting for pod "kube-proxy-ldzjl" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:44.951735  312861 pod_ready.go:94] pod "kube-proxy-ldzjl" is "Ready"
	I1209 02:37:44.951765  312861 pod_ready.go:86] duration metric: took 399.302612ms for pod "kube-proxy-ldzjl" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:45.153154  312861 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:45.552326  312861 pod_ready.go:94] pod "kube-scheduler-embed-certs-485234" is "Ready"
	I1209 02:37:45.552354  312861 pod_ready.go:86] duration metric: took 399.173417ms for pod "kube-scheduler-embed-certs-485234" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 02:37:45.552369  312861 pod_ready.go:40] duration metric: took 1.604447184s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 02:37:45.604661  312861 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 02:37:45.609733  312861 out.go:179] * Done! kubectl is now configured to use "embed-certs-485234" cluster and "default" namespace by default
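	[Editor's note] The pod_ready.go checks above treat a pod as "Ready" when its PodReady condition reports True. A hedged client-go sketch of that predicate; isPodReady is an illustrative name, not minikube's API.

```go
// Sketch of the "Ready" predicate behind pod_ready.go: a pod is Ready
// when its PodReady condition is True. isPodReady is an illustrative name.
package waitutil

import corev1 "k8s.io/api/core/v1"

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```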
	I1209 02:37:43.767040  319017 out.go:252]   - Booting up control plane ...
	I1209 02:37:43.767164  319017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:37:43.767278  319017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:37:43.768098  319017 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:37:43.783044  319017 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:37:43.783187  319017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:37:43.789972  319017 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:37:43.790328  319017 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:37:43.790389  319017 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:37:43.892326  319017 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:37:43.892481  319017 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:37:44.393995  319017 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.802982ms
	I1209 02:37:44.397118  319017 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:37:44.397249  319017 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1209 02:37:44.397369  319017 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:37:44.397471  319017 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:37:45.942145  319017 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.544944524s
	I1209 02:37:43.836001  321981 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:43.836089  321981 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:43.836212  321981 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:43.984831  321981 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:44.046271  321981 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:44.187838  321981 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:44.509658  321981 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:44.879567  321981 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:44.879785  321981 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-933067 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 02:37:45.011031  321981 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:45.011216  321981 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-933067 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 02:37:45.434282  321981 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:45.660985  321981 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:45.851667  321981 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:45.851770  321981 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:46.183910  321981 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:46.602731  321981 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:46.986353  321981 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:47.235089  321981 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:47.322556  321981 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:47.323202  321981 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:47.327774  321981 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 02:37:46.080386  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-933067
	
	I1209 02:37:46.080414  325211 ubuntu.go:182] provisioning hostname "calico-933067"
	I1209 02:37:46.080477  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:46.102909  325211 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:46.103241  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1209 02:37:46.103285  325211 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-933067 && echo "calico-933067" | sudo tee /etc/hostname
	I1209 02:37:46.256709  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-933067
	
	I1209 02:37:46.256814  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:46.277273  325211 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:46.277608  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1209 02:37:46.277661  325211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-933067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-933067/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-933067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:37:46.418111  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
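	[Editor's note] Each "About to run SSH command" block above is one remote command executed against the node container's mapped SSH port (127.0.0.1:33113 here). A rough sketch of that mechanic with golang.org/x/crypto/ssh; runSSH and the insecure host-key callback are assumptions for a local test harness, not minikube's exact code.

```go
// Sketch: run a single command over SSH against a local port-forwarded
// node, collecting combined stdout/stderr, as the provisioning log does.
package provision

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test tunnel
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}
```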
	I1209 02:37:46.418141  325211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:37:46.418184  325211 ubuntu.go:190] setting up certificates
	I1209 02:37:46.418204  325211 provision.go:84] configureAuth start
	I1209 02:37:46.418259  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-933067
	I1209 02:37:46.441263  325211 provision.go:143] copyHostCerts
	I1209 02:37:46.441335  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:37:46.441353  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:37:46.441434  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:37:46.441563  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:37:46.441576  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:37:46.441620  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:37:46.441739  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:37:46.441752  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:37:46.441791  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:37:46.441874  325211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.calico-933067 san=[127.0.0.1 192.168.103.2 calico-933067 localhost minikube]
	I1209 02:37:46.512797  325211 provision.go:177] copyRemoteCerts
	I1209 02:37:46.512857  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:37:46.512906  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:46.533407  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa Username:docker}
	I1209 02:37:46.629872  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:37:46.651220  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 02:37:46.670985  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 02:37:46.700362  325211 provision.go:87] duration metric: took 282.143631ms to configureAuth
	I1209 02:37:46.700392  325211 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:37:46.700579  325211 config.go:182] Loaded profile config "calico-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:37:46.700714  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:46.742355  325211 main.go:143] libmachine: Using SSH client type: native
	I1209 02:37:46.743126  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1209 02:37:46.743266  325211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:37:47.055774  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:37:47.055814  325211 machine.go:97] duration metric: took 4.143302766s to provisionDockerMachine
	I1209 02:37:47.055825  325211 client.go:176] duration metric: took 8.548302957s to LocalClient.Create
	I1209 02:37:47.055843  325211 start.go:167] duration metric: took 8.548372851s to libmachine.API.Create "calico-933067"
	I1209 02:37:47.055851  325211 start.go:293] postStartSetup for "calico-933067" (driver="docker")
	I1209 02:37:47.055864  325211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:37:47.055934  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:37:47.055982  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:47.078372  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa Username:docker}
	I1209 02:37:47.175932  325211 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:37:47.179303  325211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:37:47.179326  325211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:37:47.179336  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:37:47.179382  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:37:47.179466  325211 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:37:47.179557  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:37:47.186824  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:47.205592  325211 start.go:296] duration metric: took 149.729218ms for postStartSetup
	I1209 02:37:47.206004  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-933067
	I1209 02:37:47.224966  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/config.json ...
	I1209 02:37:47.225189  325211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:37:47.225227  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:47.243152  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa Username:docker}
	I1209 02:37:47.331939  325211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:37:47.336331  325211 start.go:128] duration metric: took 8.831107296s to createHost
	I1209 02:37:47.336352  325211 start.go:83] releasing machines lock for "calico-933067", held for 8.831256403s
	I1209 02:37:47.336422  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-933067
	I1209 02:37:47.354732  325211 ssh_runner.go:195] Run: cat /version.json
	I1209 02:37:47.354771  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:47.354843  325211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:37:47.354921  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-933067
	I1209 02:37:47.375314  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa Username:docker}
	I1209 02:37:47.375439  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/calico-933067/id_rsa Username:docker}
	I1209 02:37:47.469513  325211 ssh_runner.go:195] Run: systemctl --version
	I1209 02:37:47.538340  325211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:37:47.576462  325211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:37:47.581173  325211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:37:47.581315  325211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:37:47.606464  325211 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
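	[Editor's note] The find/mv step above sets conflicting bridge and podman CNI configs aside by renaming them with a .mk_disabled suffix. A plain-Go equivalent of that rename pass, under the assumption that a local directory scan is acceptable (minikube actually runs the find remotely over SSH):

```go
// Sketch: rename bridge/podman CNI configs in /etc/cni/net.d to
// <name>.mk_disabled so they don't conflict with the chosen CNI.
package provision

import (
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}
```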
	I1209 02:37:47.606484  325211 start.go:496] detecting cgroup driver to use...
	I1209 02:37:47.606519  325211 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:37:47.606556  325211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:37:47.622241  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:37:47.633558  325211 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:37:47.633606  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:37:47.648869  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:37:47.664787  325211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:37:47.745580  325211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:37:47.839057  325211 docker.go:234] disabling docker service ...
	I1209 02:37:47.839131  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:37:47.860397  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:37:47.876143  325211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:37:47.977483  325211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:37:48.077792  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:37:48.092474  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:37:48.108277  325211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:37:48.108348  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.119580  325211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:37:48.119664  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.129805  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.139960  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.150068  325211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:37:48.160499  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.181224  325211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.197350  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:37:48.207462  325211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:37:48.215833  325211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:37:48.224331  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:46.893510  319017 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.496356778s
	I1209 02:37:48.398935  319017 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001725926s
	I1209 02:37:48.415583  319017 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:37:48.429944  319017 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:37:48.442013  319017 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:37:48.442304  319017 kubeadm.go:319] [mark-control-plane] Marking the node auto-933067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:37:48.449311  319017 kubeadm.go:319] [bootstrap-token] Using token: yyml2z.kd7b7les2i5bxrhx
	I1209 02:37:48.325387  325211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:37:48.479566  325211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:37:48.479626  325211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:37:48.483519  325211 start.go:564] Will wait 60s for crictl version
	I1209 02:37:48.483569  325211 ssh_runner.go:195] Run: which crictl
	I1209 02:37:48.487400  325211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:37:48.517778  325211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:37:48.517865  325211 ssh_runner.go:195] Run: crio --version
	I1209 02:37:48.547309  325211 ssh_runner.go:195] Run: crio --version
	I1209 02:37:48.575271  325211 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 02:37:48.450577  319017 out.go:252]   - Configuring RBAC rules ...
	I1209 02:37:48.450745  319017 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:37:48.453504  319017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:37:48.458553  319017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:37:48.460993  319017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:37:48.463435  319017 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:37:48.466558  319017 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:37:48.805795  319017 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:37:49.225223  319017 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:37:49.806230  319017 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:37:49.806971  319017 kubeadm.go:319] 
	I1209 02:37:49.807118  319017 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:37:49.807148  319017 kubeadm.go:319] 
	I1209 02:37:49.807276  319017 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:37:49.807287  319017 kubeadm.go:319] 
	I1209 02:37:49.807328  319017 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:37:49.807409  319017 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:37:49.807475  319017 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:37:49.807486  319017 kubeadm.go:319] 
	I1209 02:37:49.807554  319017 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:37:49.807565  319017 kubeadm.go:319] 
	I1209 02:37:49.807628  319017 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:37:49.807654  319017 kubeadm.go:319] 
	I1209 02:37:49.807725  319017 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:37:49.807842  319017 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:37:49.807938  319017 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:37:49.807948  319017 kubeadm.go:319] 
	I1209 02:37:49.808096  319017 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:37:49.808211  319017 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:37:49.808225  319017 kubeadm.go:319] 
	I1209 02:37:49.808357  319017 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yyml2z.kd7b7les2i5bxrhx \
	I1209 02:37:49.808488  319017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:37:49.808526  319017 kubeadm.go:319] 	--control-plane 
	I1209 02:37:49.808536  319017 kubeadm.go:319] 
	I1209 02:37:49.808676  319017 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:37:49.808687  319017 kubeadm.go:319] 
	I1209 02:37:49.808808  319017 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yyml2z.kd7b7les2i5bxrhx \
	I1209 02:37:49.808949  319017 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
	I1209 02:37:49.813050  319017 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 02:37:49.813194  319017 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 02:37:49.813223  319017 cni.go:84] Creating CNI manager for ""
	I1209 02:37:49.813232  319017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 02:37:49.814863  319017 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 02:37:48.576230  325211 cli_runner.go:164] Run: docker network inspect calico-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:37:48.593070  325211 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1209 02:37:48.596924  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:48.608157  325211 kubeadm.go:884] updating cluster {Name:calico-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 02:37:48.608266  325211 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:37:48.608310  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:48.639874  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:48.639895  325211 crio.go:433] Images already preloaded, skipping extraction
	I1209 02:37:48.639933  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 02:37:48.664303  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 02:37:48.664323  325211 cache_images.go:86] Images are preloaded, skipping loading
	I1209 02:37:48.664330  325211 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1209 02:37:48.664413  325211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-933067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
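	[Editor's note] The kubelet systemd drop-in printed above is generated text. A small sketch of producing such a unit with text/template; the kubeletOpts field names are assumptions for illustration, not minikube's types.

```go
// Sketch: render a kubelet systemd drop-in like the one in the log
// from a text/template and a handful of node-specific values.
package provision

import (
	"bytes"
	"text/template"
)

const kubeletUnitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

// kubeletOpts is a hypothetical options struct for this sketch.
type kubeletOpts struct {
	Runtime, KubeletPath, NodeName, NodeIP string
}

func renderKubeletUnit(o kubeletOpts) (string, error) {
	t, err := template.New("kubelet").Parse(kubeletUnitTmpl)
	if err != nil {
		return "", err
	}
	var b bytes.Buffer
	if err := t.Execute(&b, o); err != nil {
		return "", err
	}
	return b.String(), nil
}
```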
	I1209 02:37:48.664471  325211 ssh_runner.go:195] Run: crio config
	I1209 02:37:48.707044  325211 cni.go:84] Creating CNI manager for "calico"
	I1209 02:37:48.707090  325211 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 02:37:48.707115  325211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-933067 NodeName:calico-933067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 02:37:48.707247  325211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-933067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 02:37:48.707300  325211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 02:37:48.715126  325211 binaries.go:51] Found k8s binaries, skipping transfer
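	[Editor's note] The kubeadm config dump above is a multi-document YAML file ("---" separators). As a hedged illustration of checking one field, the sketch below decodes each document with gopkg.in/yaml.v3 (an assumed dependency) and returns the KubeletConfiguration's cgroupDriver, which should match the "systemd" driver detected earlier; minikube itself templates this file rather than re-parsing it.

```go
// Sketch: walk the "---"-separated documents of a kubeadm.yaml and pull
// the cgroupDriver out of the KubeletConfiguration document.
package provision

import (
	"errors"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func cgroupDriverFromKubeadmYAML(doc string) (string, error) {
	dec := yaml.NewDecoder(strings.NewReader(doc))
	for {
		var m map[string]any
		if err := dec.Decode(&m); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			return "", err
		}
		if m["kind"] == "KubeletConfiguration" {
			if d, ok := m["cgroupDriver"].(string); ok {
				return d, nil
			}
		}
	}
	return "", errors.New("no KubeletConfiguration document found")
}
```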
	I1209 02:37:48.715185  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 02:37:48.722596  325211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1209 02:37:48.734613  325211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 02:37:48.748363  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1209 02:37:48.759721  325211 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1209 02:37:48.763014  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 02:37:48.772005  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:37:48.856185  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 02:37:48.889358  325211 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067 for IP: 192.168.103.2
	I1209 02:37:48.889382  325211 certs.go:195] generating shared ca certs ...
	I1209 02:37:48.889403  325211 certs.go:227] acquiring lock for ca certs: {Name:mk08a12a4ba2a08166ea6f2d3a696a32f698ce6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:48.889567  325211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key
	I1209 02:37:48.889618  325211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key
	I1209 02:37:48.889661  325211 certs.go:257] generating profile certs ...
	I1209 02:37:48.889726  325211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.key
	I1209 02:37:48.889738  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.crt with IP's: []
	I1209 02:37:48.974567  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.crt ...
	I1209 02:37:48.974606  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.crt: {Name:mk8cf6baa2eba347c9270345380256536d601bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:48.974798  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.key ...
	I1209 02:37:48.974821  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/client.key: {Name:mk7d3950042b80e16402a95d1b6609b162d1904a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:48.974970  325211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key.317d5a1b
	I1209 02:37:48.974994  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt.317d5a1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1209 02:37:49.104136  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt.317d5a1b ...
	I1209 02:37:49.104180  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt.317d5a1b: {Name:mkc93c7c769e50aa428f67b9757ba729eecf66db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:49.104404  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key.317d5a1b ...
	I1209 02:37:49.104429  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key.317d5a1b: {Name:mke4e21583d8786f955205d48a1074e85be166d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:49.104553  325211 certs.go:382] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt.317d5a1b -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt
	I1209 02:37:49.104686  325211 certs.go:386] copying /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key.317d5a1b -> /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key
	I1209 02:37:49.104788  325211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.key
	I1209 02:37:49.104814  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.crt with IP's: []
	I1209 02:37:49.172815  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.crt ...
	I1209 02:37:49.172840  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.crt: {Name:mk0dc287fe3608d446f0a4cad15a2a0e59c4bf5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:37:49.172984  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.key ...
	I1209 02:37:49.172992  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.key: {Name:mk8ad50cc6a6be9ae9e00b1ac6d2aa127f62e84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
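	[Editor's note] The crypto.go lines above generate profile certificates signed by the shared minikube CA, with IP SANs such as 10.96.0.1, 127.0.0.1, and 192.168.103.2 on the apiserver cert. A compact crypto/x509 sketch of issuing such a cert; serial handling, key size, and validity are simplified assumptions.

```go
// Sketch: issue an RSA serving certificate with IP SANs, signed by an
// existing CA, mirroring the "Generating cert ... with IP's" log lines.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified; real code uses random serials
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.103.2 as in the log
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
```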
	I1209 02:37:49.173151  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem (1338 bytes)
	W1209 02:37:49.173184  325211 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552_empty.pem, impossibly tiny 0 bytes
	I1209 02:37:49.173191  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 02:37:49.173220  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem (1078 bytes)
	I1209 02:37:49.173240  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem (1123 bytes)
	I1209 02:37:49.173259  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem (1679 bytes)
	I1209 02:37:49.173295  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:37:49.173872  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 02:37:49.192665  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 02:37:49.214841  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 02:37:49.237906  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 02:37:49.255542  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 02:37:49.272274  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 02:37:49.289490  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 02:37:49.310476  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/calico-933067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 02:37:49.328179  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /usr/share/ca-certificates/145522.pem (1708 bytes)
	I1209 02:37:49.345993  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 02:37:49.363358  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1338 bytes)
	I1209 02:37:49.379592  325211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 02:37:49.391880  325211 ssh_runner.go:195] Run: openssl version
	I1209 02:37:49.397922  325211 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/145522.pem
	I1209 02:37:49.405628  325211 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/145522.pem /etc/ssl/certs/145522.pem
	I1209 02:37:49.412689  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145522.pem
	I1209 02:37:49.416202  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:03 /usr/share/ca-certificates/145522.pem
	I1209 02:37:49.416248  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145522.pem
	I1209 02:37:49.452515  325211 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:49.462235  325211 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/145522.pem /etc/ssl/certs/3ec20f2e.0
	I1209 02:37:49.474435  325211 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:49.481989  325211 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 02:37:49.489658  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:49.493467  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:49.493523  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 02:37:49.533960  325211 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 02:37:49.545663  325211 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 02:37:49.554320  325211 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14552.pem
	I1209 02:37:49.562313  325211 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem
	I1209 02:37:49.569727  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I1209 02:37:49.573860  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:03 /usr/share/ca-certificates/14552.pem
	I1209 02:37:49.573910  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I1209 02:37:49.622327  325211 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 02:37:49.631732  325211 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/51391683.0
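	[Editor's note] The openssl x509 -hash plus ln -fs sequence above installs each PEM under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients can locate it. The sketch below mirrors those two commands locally; installCACert is an illustrative name, and minikube performs this remotely over SSH rather than on the host.

```go
// Sketch: compute the OpenSSL subject hash for a PEM and symlink
// <hash>.0 to it inside the system cert directory, like the log's
// "openssl x509 -hash -noout" followed by "ln -fs".
package provision

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}
```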
	I1209 02:37:49.640710  325211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 02:37:49.644582  325211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 02:37:49.644689  325211 kubeadm.go:401] StartCluster: {Name:calico-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:37:49.644768  325211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 02:37:49.644824  325211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 02:37:49.677016  325211 cri.go:89] found id: ""
	I1209 02:37:49.677082  325211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 02:37:49.687309  325211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 02:37:49.698410  325211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 02:37:49.698479  325211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 02:37:49.709079  325211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 02:37:49.709096  325211 kubeadm.go:158] found existing configuration files:
	
	I1209 02:37:49.709144  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 02:37:49.719478  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 02:37:49.719540  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 02:37:49.727890  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 02:37:49.736999  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 02:37:49.737057  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 02:37:49.744549  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 02:37:49.754817  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 02:37:49.754875  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 02:37:49.763075  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 02:37:49.771777  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 02:37:49.771828  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
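Each grep/rm pair above implements minikube's stale-kubeconfig cleanup: if a kubeconfig does not point at the expected control-plane endpoint (grep exit status 1), or does not exist at all (exit status 2, as here), it is removed so kubeadm can regenerate it. A sketch of the check for a single file:

    f=/etc/kubernetes/admin.conf
    # grep exits 1 if the endpoint string is absent and 2 if the file is missing
    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$f"; then
        sudo rm -f "$f"    # stale or absent: let kubeadm write a fresh one
    fi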
	I1209 02:37:49.779769  325211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 02:37:49.826344  325211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 02:37:49.826481  325211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 02:37:49.853531  325211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 02:37:49.853626  325211 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1209 02:37:49.853723  325211 kubeadm.go:319] OS: Linux
	I1209 02:37:49.853806  325211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 02:37:49.853864  325211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 02:37:49.853930  325211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 02:37:49.853996  325211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 02:37:49.854059  325211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 02:37:49.854124  325211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 02:37:49.854191  325211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 02:37:49.854247  325211 kubeadm.go:319] CGROUPS_IO: enabled
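The CGROUPS_* lines are kubeadm's system verification enumerating which cgroup controllers the kernel exposes. On a cgroup v2 host the same list can be read directly:

    # Controllers available at the cgroup root; kubeadm expects cpu, memory, pids, ...
    cat /sys/fs/cgroup/cgroup.controllers
    # e.g. "cpuset cpu io memory hugetlb pids rdma misc"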
	I1209 02:37:49.942790  325211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 02:37:49.942924  325211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 02:37:49.943062  325211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 02:37:49.952495  325211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 02:37:49.815852  319017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 02:37:49.820729  319017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 02:37:49.820748  319017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 02:37:49.836181  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 02:37:50.126555  319017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 02:37:50.126731  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-933067 minikube.k8s.io/updated_at=2025_12_09T02_37_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=auto-933067 minikube.k8s.io/primary=true
	I1209 02:37:50.126659  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:50.243139  319017 ops.go:34] apiserver oom_adj: -16
	I1209 02:37:50.243186  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:50.743775  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 02:37:51.244278  319017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
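The repeated `kubectl get sa default` calls are a readiness poll: the default ServiceAccount only appears once the controller-manager's serviceaccount controller has run, so minikube retries about every 500ms until it exists. The equivalent loop in shell, using the kubectl and kubeconfig paths from the log:

    # Poll until the namespace's default ServiceAccount has been created
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
            --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
        sleep 0.5
    done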
	I1209 02:37:47.328968  321981 out.go:252]   - Booting up control plane ...
	I1209 02:37:47.329066  321981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:37:47.329136  321981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:37:47.329879  321981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:37:47.343422  321981 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:37:47.343562  321981 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:37:47.350474  321981 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:37:47.350907  321981 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:37:47.351008  321981 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:37:47.459866  321981 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:37:47.460040  321981 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:37:48.961263  321981 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501344954s
	I1209 02:37:48.965442  321981 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:37:48.965843  321981 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1209 02:37:48.966156  321981 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:37:48.966334  321981 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:37:50.504752  321981 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.538881763s
	I1209 02:37:50.694988  321981 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.729439387s
	I1209 02:37:52.467111  321981 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501382804s
	I1209 02:37:52.484736  321981 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:37:52.494709  321981 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:37:52.503403  321981 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:37:52.503765  321981 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-933067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:37:52.511502  321981 kubeadm.go:319] [bootstrap-token] Using token: 9jq73n.z228vdd4tohsk679
	I1209 02:37:49.955290  325211 out.go:252]   - Generating certificates and keys ...
	I1209 02:37:49.955456  325211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 02:37:49.955549  325211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 02:37:50.208084  325211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 02:37:50.461760  325211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 02:37:50.507281  325211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 02:37:50.822651  325211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 02:37:51.000813  325211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 02:37:51.001025  325211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-933067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1209 02:37:51.112629  325211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 02:37:51.112862  325211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-933067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1209 02:37:51.500621  325211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 02:37:51.700567  325211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 02:37:51.925354  325211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 02:37:51.925504  325211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 02:37:52.017350  325211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 02:37:52.370176  325211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 02:37:52.657129  325211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 02:37:53.222896  325211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 02:37:53.372562  325211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 02:37:53.373111  325211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 02:37:53.376445  325211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
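After the [certs] and [kubeconfig] phases above, everything kubeadm generated can be audited from the node. A sketch, noting that minikube keeps its PKI under /var/lib/minikube/certs rather than kubeadm's default /etc/kubernetes/pki, and that `openssl x509 -ext` needs OpenSSL 1.1.1+:

    # Summarize expiry of every cluster certificate
    sudo kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs
    # Or inspect one cert's subject and SANs directly
    sudo openssl x509 -noout -subject -ext subjectAltName \
        -in /var/lib/minikube/certs/etcd/server.crt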
	
	
	==> CRI-O <==
	Dec 09 02:37:42 embed-certs-485234 crio[779]: time="2025-12-09T02:37:42.811866925Z" level=info msg="Starting container: 3ee19204f910cbf4964a7b3975db4db074c421f1e703e62185129c5722a34e60" id=9740d135-824d-40d4-9ca1-46ebba47e17c name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:42 embed-certs-485234 crio[779]: time="2025-12-09T02:37:42.82878634Z" level=info msg="Started container" PID=1866 containerID=3ee19204f910cbf4964a7b3975db4db074c421f1e703e62185129c5722a34e60 description=kube-system/storage-provisioner/storage-provisioner id=9740d135-824d-40d4-9ca1-46ebba47e17c name=/runtime.v1.RuntimeService/StartContainer sandboxID=17c9256250d51763fff27b5cb30023a82970b516718a0caa8dc045bab5ceb5a1
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.089005845Z" level=info msg="Running pod sandbox: default/busybox/POD" id=428f4f56-2553-4447-b72e-9cc6b50f425a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.08908858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.095994641Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:52121d4da1b1b96858d53c18ff0d013117e43beb2e7d80216ef2ba9940c69d10 UID:a3353aeb-70fb-463b-850d-43e0507d25ee NetNS:/var/run/netns/2aaf3583-d00f-4621-9584-5789bdf53cc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a2630}] Aliases:map[]}"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.096032423Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.108302151Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:52121d4da1b1b96858d53c18ff0d013117e43beb2e7d80216ef2ba9940c69d10 UID:a3353aeb-70fb-463b-850d-43e0507d25ee NetNS:/var/run/netns/2aaf3583-d00f-4621-9584-5789bdf53cc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a2630}] Aliases:map[]}"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.108455105Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.10934351Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.110396512Z" level=info msg="Ran pod sandbox 52121d4da1b1b96858d53c18ff0d013117e43beb2e7d80216ef2ba9940c69d10 with infra container: default/busybox/POD" id=428f4f56-2553-4447-b72e-9cc6b50f425a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.11222877Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5bbb5869-5bd4-48c9-80a7-076d6ad9530b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.112346074Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5bbb5869-5bd4-48c9-80a7-076d6ad9530b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.112392355Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5bbb5869-5bd4-48c9-80a7-076d6ad9530b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.11335545Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c869f9a4-15d2-460d-8aac-43ee40e5eaf0 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.116297308Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.794299828Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c869f9a4-15d2-460d-8aac-43ee40e5eaf0 name=/runtime.v1.ImageService/PullImage
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.795420856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eb0940e1-0f0a-409f-86ec-0a9a1a310cfa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.801352155Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab5fac90-c17a-4a3e-905d-82d080fe5586 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.805013607Z" level=info msg="Creating container: default/busybox/busybox" id=6255e4cc-6c7e-482a-aafe-f70ce29c6e74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.805143782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.810894137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.811498878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.849898815Z" level=info msg="Created container accfe15e40f3b7dc09bee54b6eb69d222580ffbf9fd93279cbc2f0865b08a860: default/busybox/busybox" id=6255e4cc-6c7e-482a-aafe-f70ce29c6e74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.850858143Z" level=info msg="Starting container: accfe15e40f3b7dc09bee54b6eb69d222580ffbf9fd93279cbc2f0865b08a860" id=29988b82-51ea-4937-a420-0c27cf260502 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:37:46 embed-certs-485234 crio[779]: time="2025-12-09T02:37:46.852987556Z" level=info msg="Started container" PID=1940 containerID=accfe15e40f3b7dc09bee54b6eb69d222580ffbf9fd93279cbc2f0865b08a860 description=default/busybox/busybox id=29988b82-51ea-4937-a420-0c27cf260502 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52121d4da1b1b96858d53c18ff0d013117e43beb2e7d80216ef2ba9940c69d10
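The ImageStatus → PullImage → CreateContainer → StartContainer sequence in the CRI-O log above is the standard CRI flow for a pod's first container. The same RPCs can be driven by hand with crictl, e.g. for the image involved here:

    # ImageStatus: is the image present locally?
    sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
    # PullImage: fetch it if not
    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc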
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	accfe15e40f3b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   52121d4da1b1b       busybox                                      default
	1cd0b850a3b3b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   21207ab85d213       coredns-66bc5c9577-sk4dm                     kube-system
	3ee19204f910c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   17c9256250d51       storage-provisioner                          kube-system
	731c938720433       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   3f9c0b716140a       kindnet-m72mz                                kube-system
	9766e979f6b7d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   ab9eac1cc1109       kube-proxy-ldzjl                             kube-system
	862929284ebd1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   fe503e831fabd       etcd-embed-certs-485234                      kube-system
	e372aec76856c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   f7483539ba712       kube-scheduler-embed-certs-485234            kube-system
	c62a612962cd1       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   cc5b09545d13d       kube-apiserver-embed-certs-485234            kube-system
	17c43479517a4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   3ab5858745dda       kube-controller-manager-embed-certs-485234   kube-system
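This table is `crictl ps -a`-style output gathered at collection time; ATTEMPT counts kubelet restarts of the container within its pod sandbox (POD ID). To reproduce or narrow it on a live node:

    # All containers, including exited ones
    sudo crictl ps -a
    # Only containers whose name matches, e.g. the CoreDNS instance above
    sudo crictl ps -a --name coredns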
	
	
	==> coredns [1cd0b850a3b3b01c05414f0729cf9e649a386a38a40b1befa70bc859730d1858] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53305 - 28518 "HINFO IN 6407310817517668351.4802596034199627594. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079841477s
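The single HINFO query with a long random name is CoreDNS's loop-detection self-check; the NXDOMAIN answer is expected. A healthy instance can be probed the same way, assuming the kube-dns ClusterIP 10.96.0.10 allocated in the kube-apiserver log further down:

    # Ask CoreDNS for a known in-cluster name (run from a pod or the node's netns)
    dig +short @10.96.0.10 kubernetes.default.svc.cluster.local A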
	
	
	==> describe nodes <==
	Name:               embed-certs-485234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-485234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=embed-certs-485234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_37_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:37:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-485234
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:37:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:37:42 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:37:42 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:37:42 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:37:42 +0000   Tue, 09 Dec 2025 02:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-485234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                e57d68b0-a212-4022-b9d5-5572cf2bedcf
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-sk4dm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-485234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-m72mz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-485234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-485234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-ldzjl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-485234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-485234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-485234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-485234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-485234 event: Registered Node embed-certs-485234 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-485234 status is now: NodeReady
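The node dump above is `kubectl describe node` output. Two quick ways to re-derive the interesting parts, assuming a working kubeconfig for the cluster:

    kubectl describe node embed-certs-485234
    # Just the condition types and statuses, via JSONPath
    kubectl get node embed-certs-485234 \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'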
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
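The repeated "martian source" lines mean the kernel received packets with a source address that is impossible for the interface: here 127.0.0.1 arriving on eth0, a common artifact of container NAT/hairpin traffic. They are harmless noise in this environment and only appear because martian logging is enabled; to inspect or silence it:

    # Is martian logging enabled?
    sysctl net.ipv4.conf.all.log_martians
    # Silence it (a judgment call; it can hide real spoofing on production hosts)
    sudo sysctl -w net.ipv4.conf.all.log_martians=0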
	
	
	==> etcd [862929284ebd137e8d6c1d3a99b3bb94cab3d71b167113dc91dd48b2b75b69ab] <==
	{"level":"warn","ts":"2025-12-09T02:37:22.481245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.490008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.525282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.539408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.548081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.558093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:37:22.612946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:37:29.682845Z","caller":"traceutil/trace.go:172","msg":"trace[590651103] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"120.455682ms","start":"2025-12-09T02:37:29.562374Z","end":"2025-12-09T02:37:29.682829Z","steps":["trace[590651103] 'process raft request'  (duration: 120.346514ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.182619Z","caller":"traceutil/trace.go:172","msg":"trace[336754287] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"110.482155ms","start":"2025-12-09T02:37:30.072112Z","end":"2025-12-09T02:37:30.182595Z","steps":["trace[336754287] 'process raft request'  (duration: 110.401032ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.384943Z","caller":"traceutil/trace.go:172","msg":"trace[1458315136] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"121.242684ms","start":"2025-12-09T02:37:30.263679Z","end":"2025-12-09T02:37:30.384922Z","steps":["trace[1458315136] 'process raft request'  (duration: 117.725559ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:37:30.384956Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.115752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-12-09T02:37:30.385029Z","caller":"traceutil/trace.go:172","msg":"trace[1284261519] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:298; }","duration":"123.212591ms","start":"2025-12-09T02:37:30.261801Z","end":"2025-12-09T02:37:30.385013Z","steps":["trace[1284261519] 'range keys from in-memory index tree'  (duration: 123.009726ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.501742Z","caller":"traceutil/trace.go:172","msg":"trace[1276519925] linearizableReadLoop","detail":"{readStateIndex:309; appliedIndex:309; }","duration":"120.398333ms","start":"2025-12-09T02:37:30.381325Z","end":"2025-12-09T02:37:30.501724Z","steps":["trace[1276519925] 'read index received'  (duration: 120.392289ms)","trace[1276519925] 'applied index is now lower than readState.Index'  (duration: 5.167µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:37:30.509412Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.304976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-12-09T02:37:30.509440Z","caller":"traceutil/trace.go:172","msg":"trace[1113945644] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"245.723095ms","start":"2025-12-09T02:37:30.263703Z","end":"2025-12-09T02:37:30.509426Z","steps":["trace[1113945644] 'process raft request'  (duration: 238.092045ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.509460Z","caller":"traceutil/trace.go:172","msg":"trace[5709638] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:299; }","duration":"147.369126ms","start":"2025-12-09T02:37:30.362083Z","end":"2025-12-09T02:37:30.509452Z","steps":["trace[5709638] 'agreement among raft nodes before linearized reading'  (duration: 139.71873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:37:30.530177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.565406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-09T02:37:30.530229Z","caller":"traceutil/trace.go:172","msg":"trace[980115994] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:300; }","duration":"118.632644ms","start":"2025-12-09T02:37:30.411587Z","end":"2025-12-09T02:37:30.530219Z","steps":["trace[980115994] 'agreement among raft nodes before linearized reading'  (duration: 118.475183ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.530264Z","caller":"traceutil/trace.go:172","msg":"trace[21973220] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"140.937049ms","start":"2025-12-09T02:37:30.389313Z","end":"2025-12-09T02:37:30.530250Z","steps":["trace[21973220] 'process raft request'  (duration: 140.89595ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.530318Z","caller":"traceutil/trace.go:172","msg":"trace[788066951] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"142.305568ms","start":"2025-12-09T02:37:30.388000Z","end":"2025-12-09T02:37:30.530306Z","steps":["trace[788066951] 'process raft request'  (duration: 142.105739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:30.530425Z","caller":"traceutil/trace.go:172","msg":"trace[761274499] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"142.120078ms","start":"2025-12-09T02:37:30.388289Z","end":"2025-12-09T02:37:30.530410Z","steps":["trace[761274499] 'process raft request'  (duration: 141.879263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:37:41.121167Z","caller":"traceutil/trace.go:172","msg":"trace[1604128325] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:404; }","duration":"109.710605ms","start":"2025-12-09T02:37:41.011433Z","end":"2025-12-09T02:37:41.121144Z","steps":["trace[1604128325] 'read index received'  (duration: 109.693111ms)","trace[1604128325] 'applied index is now lower than readState.Index'  (duration: 15.751µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:37:41.137740Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.293176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-485234\" limit:1 ","response":"range_response_count:1 size:5589"}
	{"level":"info","ts":"2025-12-09T02:37:41.137812Z","caller":"traceutil/trace.go:172","msg":"trace[1369758891] range","detail":"{range_begin:/registry/minions/embed-certs-485234; range_end:; response_count:1; response_revision:391; }","duration":"126.373436ms","start":"2025-12-09T02:37:41.011423Z","end":"2025-12-09T02:37:41.137797Z","steps":["trace[1369758891] 'agreement among raft nodes before linearized reading'  (duration: 109.788918ms)","trace[1369758891] 'range keys from in-memory index tree'  (duration: 16.392352ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:37:41.137818Z","caller":"traceutil/trace.go:172","msg":"trace[2115753611] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"132.032662ms","start":"2025-12-09T02:37:41.005768Z","end":"2025-12-09T02:37:41.137800Z","steps":["trace[2115753611] 'process raft request'  (duration: 115.416179ms)","trace[2115753611] 'compare'  (duration: 16.441494ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:37:54 up  1:20,  0 user,  load average: 4.92, 3.07, 2.07
	Linux embed-certs-485234 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [731c9387204330353b415bebf59b2a0d3a71ec4f2eedf368271718b7417cf08f] <==
	I1209 02:37:31.825317       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:37:31.825649       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:37:31.825805       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:37:31.825827       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:37:31.825856       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:37:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:37:32.122966       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:37:32.123007       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:37:32.123020       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:37:32.123191       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:37:32.523695       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:37:32.523775       1 metrics.go:72] Registering metrics
	I1209 02:37:32.524202       1 controller.go:711] "Syncing nftables rules"
	I1209 02:37:42.123936       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:37:42.123999       1 main.go:301] handling current node
	I1209 02:37:52.123522       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:37:52.123566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c62a612962cd1e0bd6368a6a4d596758611ec4be5a4c91450beccf1e1dc5953e] <==
	E1209 02:37:23.266313       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1209 02:37:23.272990       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:37:23.280352       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:37:23.281049       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1209 02:37:23.289016       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:37:23.289444       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:37:23.469800       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:37:24.076600       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 02:37:24.080452       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 02:37:24.080474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:37:24.563145       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:37:24.600987       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:37:24.682252       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 02:37:24.689206       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1209 02:37:24.690266       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:37:24.694584       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:37:25.116394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:37:25.585363       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:37:25.598112       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 02:37:25.605140       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:37:30.187476       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1209 02:37:31.018317       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:37:31.220911       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:37:31.235848       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1209 02:37:52.877275       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:36046: use of closed network connection
	
	
	==> kube-controller-manager [17c43479517a4a78b916c5f351c7c18f3c2c369db698d99785886fe384c31809] <==
	I1209 02:37:30.115866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 02:37:30.115882       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 02:37:30.115920       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:37:30.115970       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 02:37:30.115977       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:37:30.117228       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:37:30.117252       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 02:37:30.119479       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:37:30.119498       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:37:30.122193       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1209 02:37:30.122254       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 02:37:30.122291       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 02:37:30.122296       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 02:37:30.122302       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 02:37:30.124393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 02:37:30.126677       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1209 02:37:30.126900       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:37:30.131248       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:37:30.135419       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 02:37:30.150894       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:37:30.150908       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 02:37:30.150915       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 02:37:30.150924       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:37:30.226338       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-485234" podCIDRs=["10.244.0.0/24"]
	I1209 02:37:45.073080       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9766e979f6b7d38737b1744f6e14d56b9852c329bbeecc2e8d688935169f4878] <==
	I1209 02:37:31.589015       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:37:31.656741       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:37:31.757594       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:37:31.757658       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:37:31.757770       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:37:31.781946       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:37:31.782015       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:37:31.789139       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:37:31.789544       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:37:31.789586       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:37:31.794491       1 config.go:309] "Starting node config controller"
	I1209 02:37:31.794564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:37:31.794585       1 config.go:200] "Starting service config controller"
	I1209 02:37:31.796140       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:37:31.794594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:37:31.794909       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:37:31.796188       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:37:31.794610       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:37:31.796205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:37:31.897072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:37:31.897098       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:37:31.897104       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
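The one error in this section is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP. The log itself suggests the fix; in a kubeadm-managed cluster it would go in the kube-proxy ConfigMap (minikube templates this config itself, so treat the snippet as a sketch):

    # Edit the kube-proxy ConfigMap and set, under KubeProxyConfiguration:
    #   nodePortAddresses: ["primary"]   # or explicit CIDRs, e.g. ["192.168.94.0/24"]
    kubectl -n kube-system edit configmap kube-proxy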
	
	
	==> kube-scheduler [e372aec76856c82daca98c6697a3e1086a16c6b284532845f6b673761463c8eb] <==
	E1209 02:37:23.151583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 02:37:23.151597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:37:23.151682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 02:37:23.151691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 02:37:23.151746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:37:23.151773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 02:37:23.151787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 02:37:23.151714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 02:37:23.151866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 02:37:23.151875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 02:37:23.151887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 02:37:23.151902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 02:37:23.967323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 02:37:23.987605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1209 02:37:24.063586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 02:37:24.083845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 02:37:24.132041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 02:37:24.244032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 02:37:24.309582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 02:37:24.328010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 02:37:24.342074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 02:37:24.354683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 02:37:24.358914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 02:37:24.378169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1209 02:37:26.546391       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.316627    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.583495    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5960df0e-74d0-4df0-a55b-e02828d2b755-xtables-lock\") pod \"kube-proxy-ldzjl\" (UID: \"5960df0e-74d0-4df0-a55b-e02828d2b755\") " pod="kube-system/kube-proxy-ldzjl"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.583531    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5960df0e-74d0-4df0-a55b-e02828d2b755-kube-proxy\") pod \"kube-proxy-ldzjl\" (UID: \"5960df0e-74d0-4df0-a55b-e02828d2b755\") " pod="kube-system/kube-proxy-ldzjl"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.583549    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5960df0e-74d0-4df0-a55b-e02828d2b755-lib-modules\") pod \"kube-proxy-ldzjl\" (UID: \"5960df0e-74d0-4df0-a55b-e02828d2b755\") " pod="kube-system/kube-proxy-ldzjl"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.583567    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4rf\" (UniqueName: \"kubernetes.io/projected/5960df0e-74d0-4df0-a55b-e02828d2b755-kube-api-access-ch4rf\") pod \"kube-proxy-ldzjl\" (UID: \"5960df0e-74d0-4df0-a55b-e02828d2b755\") " pod="kube-system/kube-proxy-ldzjl"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.684655    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktw2\" (UniqueName: \"kubernetes.io/projected/f5bc8f03-4058-446e-9c8b-af2472536ab6-kube-api-access-dktw2\") pod \"kindnet-m72mz\" (UID: \"f5bc8f03-4058-446e-9c8b-af2472536ab6\") " pod="kube-system/kindnet-m72mz"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.684718    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5bc8f03-4058-446e-9c8b-af2472536ab6-xtables-lock\") pod \"kindnet-m72mz\" (UID: \"f5bc8f03-4058-446e-9c8b-af2472536ab6\") " pod="kube-system/kindnet-m72mz"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.684747    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5bc8f03-4058-446e-9c8b-af2472536ab6-lib-modules\") pod \"kindnet-m72mz\" (UID: \"f5bc8f03-4058-446e-9c8b-af2472536ab6\") " pod="kube-system/kindnet-m72mz"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: I1209 02:37:30.684874    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f5bc8f03-4058-446e-9c8b-af2472536ab6-cni-cfg\") pod \"kindnet-m72mz\" (UID: \"f5bc8f03-4058-446e-9c8b-af2472536ab6\") " pod="kube-system/kindnet-m72mz"
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.740866    1332 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.740900    1332 projected.go:196] Error preparing data for projected volume kube-api-access-ch4rf for pod kube-system/kube-proxy-ldzjl: configmap "kube-root-ca.crt" not found
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.740992    1332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5960df0e-74d0-4df0-a55b-e02828d2b755-kube-api-access-ch4rf podName:5960df0e-74d0-4df0-a55b-e02828d2b755 nodeName:}" failed. No retries permitted until 2025-12-09 02:37:31.240965077 +0000 UTC m=+5.861269913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ch4rf" (UniqueName: "kubernetes.io/projected/5960df0e-74d0-4df0-a55b-e02828d2b755-kube-api-access-ch4rf") pod "kube-proxy-ldzjl" (UID: "5960df0e-74d0-4df0-a55b-e02828d2b755") : configmap "kube-root-ca.crt" not found
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.791155    1332 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.791191    1332 projected.go:196] Error preparing data for projected volume kube-api-access-dktw2 for pod kube-system/kindnet-m72mz: configmap "kube-root-ca.crt" not found
	Dec 09 02:37:30 embed-certs-485234 kubelet[1332]: E1209 02:37:30.791278    1332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f5bc8f03-4058-446e-9c8b-af2472536ab6-kube-api-access-dktw2 podName:f5bc8f03-4058-446e-9c8b-af2472536ab6 nodeName:}" failed. No retries permitted until 2025-12-09 02:37:31.291257109 +0000 UTC m=+5.911561938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dktw2" (UniqueName: "kubernetes.io/projected/f5bc8f03-4058-446e-9c8b-af2472536ab6-kube-api-access-dktw2") pod "kindnet-m72mz" (UID: "f5bc8f03-4058-446e-9c8b-af2472536ab6") : configmap "kube-root-ca.crt" not found
	Dec 09 02:37:32 embed-certs-485234 kubelet[1332]: I1209 02:37:32.513896    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ldzjl" podStartSLOduration=2.513872642 podStartE2EDuration="2.513872642s" podCreationTimestamp="2025-12-09 02:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:37:32.513203315 +0000 UTC m=+7.133508154" watchObservedRunningTime="2025-12-09 02:37:32.513872642 +0000 UTC m=+7.134177481"
	Dec 09 02:37:33 embed-certs-485234 kubelet[1332]: I1209 02:37:33.157558    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m72mz" podStartSLOduration=3.157535018 podStartE2EDuration="3.157535018s" podCreationTimestamp="2025-12-09 02:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:37:32.528686942 +0000 UTC m=+7.148991780" watchObservedRunningTime="2025-12-09 02:37:33.157535018 +0000 UTC m=+7.777839856"
	Dec 09 02:37:42 embed-certs-485234 kubelet[1332]: I1209 02:37:42.395206    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 09 02:37:42 embed-certs-485234 kubelet[1332]: I1209 02:37:42.467366    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bc9e893-f0f2-4783-8ded-7fd6e4cd1785-config-volume\") pod \"coredns-66bc5c9577-sk4dm\" (UID: \"8bc9e893-f0f2-4783-8ded-7fd6e4cd1785\") " pod="kube-system/coredns-66bc5c9577-sk4dm"
	Dec 09 02:37:42 embed-certs-485234 kubelet[1332]: I1209 02:37:42.467414    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2-tmp\") pod \"storage-provisioner\" (UID: \"1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2\") " pod="kube-system/storage-provisioner"
	Dec 09 02:37:42 embed-certs-485234 kubelet[1332]: I1209 02:37:42.467445    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5m8\" (UniqueName: \"kubernetes.io/projected/1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2-kube-api-access-mn5m8\") pod \"storage-provisioner\" (UID: \"1c12b52a-cef6-4eb8-adcf-d4a09a3c46a2\") " pod="kube-system/storage-provisioner"
	Dec 09 02:37:42 embed-certs-485234 kubelet[1332]: I1209 02:37:42.467473    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszxx\" (UniqueName: \"kubernetes.io/projected/8bc9e893-f0f2-4783-8ded-7fd6e4cd1785-kube-api-access-nszxx\") pod \"coredns-66bc5c9577-sk4dm\" (UID: \"8bc9e893-f0f2-4783-8ded-7fd6e4cd1785\") " pod="kube-system/coredns-66bc5c9577-sk4dm"
	Dec 09 02:37:43 embed-certs-485234 kubelet[1332]: I1209 02:37:43.550278    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.550259383 podStartE2EDuration="12.550259383s" podCreationTimestamp="2025-12-09 02:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:37:43.541178512 +0000 UTC m=+18.161483350" watchObservedRunningTime="2025-12-09 02:37:43.550259383 +0000 UTC m=+18.170564226"
	Dec 09 02:37:43 embed-certs-485234 kubelet[1332]: I1209 02:37:43.550498    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sk4dm" podStartSLOduration=12.550491992 podStartE2EDuration="12.550491992s" podCreationTimestamp="2025-12-09 02:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:37:43.550449025 +0000 UTC m=+18.170753858" watchObservedRunningTime="2025-12-09 02:37:43.550491992 +0000 UTC m=+18.170796830"
	Dec 09 02:37:45 embed-certs-485234 kubelet[1332]: I1209 02:37:45.884931    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgxj\" (UniqueName: \"kubernetes.io/projected/a3353aeb-70fb-463b-850d-43e0507d25ee-kube-api-access-dkgxj\") pod \"busybox\" (UID: \"a3353aeb-70fb-463b-850d-43e0507d25ee\") " pod="default/busybox"
	
	
	==> storage-provisioner [3ee19204f910cbf4964a7b3975db4db074c421f1e703e62185129c5722a34e60] <==
	I1209 02:37:42.855834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:37:42.866210       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:37:42.866441       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:37:42.870423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:42.892159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:42.892345       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:37:42.892985       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5aeea5d1-f9d5-472a-8ee4-5bcc362f6ec9", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-485234_15350186-4cb2-40a4-a04b-0324b209433d became leader
	I1209 02:37:42.893044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_15350186-4cb2-40a4-a04b-0324b209433d!
	W1209 02:37:42.895708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:42.906502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:37:42.993765       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_15350186-4cb2-40a4-a04b-0324b209433d!
	W1209 02:37:44.910181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:44.915254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:46.917665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:46.923069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:48.928803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:48.935178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:50.938174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:50.941777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:52.944973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:37:52.949594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
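Note: the kube-scheduler "Failed to watch ... is forbidden" errors in the dump above are the usual symptom of a control plane still reconciling RBAC at startup, and the closing "Caches are synced" line shows they resolved on their own; likewise, the kubelet's `configmap "kube-root-ca.crt" not found` mount retries clear once that ConfigMap is published a moment later. A hedged way to re-check the scheduler's permissions after the fact, using kubectl impersonation against this profile's context:

	# sketch: confirm the scheduler can now list the resources it was denied above
	kubectl --context embed-certs-485234 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context embed-certs-485234 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler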
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-485234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.49s)
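Note: the storage-provisioner warnings above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") come from its leader-election loop, which still renews its lock through a v1 Endpoints object rather than a Lease; they are noisy but not the cause of this failure. The lock object named in the event can be inspected directly, assuming the profile's kubeconfig:

	# the Endpoints object the provisioner uses as its leader-election lock
	kubectl --context embed-certs-485234 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml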

TestStartStop/group/embed-certs/serial/Pause (6.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-485234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-485234 --alsologtostderr -v=1: exit status 80 (2.418556882s)

-- stdout --
	* Pausing node embed-certs-485234 ... 
	
	

-- /stdout --
** stderr ** 
	I1209 02:39:11.705777  355030 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:39:11.705889  355030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:11.705899  355030 out.go:374] Setting ErrFile to fd 2...
	I1209 02:39:11.705902  355030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:11.706119  355030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:39:11.706336  355030 out.go:368] Setting JSON to false
	I1209 02:39:11.706352  355030 mustload.go:66] Loading cluster: embed-certs-485234
	I1209 02:39:11.706674  355030 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:11.707049  355030 cli_runner.go:164] Run: docker container inspect embed-certs-485234 --format={{.State.Status}}
	I1209 02:39:11.725142  355030 host.go:66] Checking if "embed-certs-485234" exists ...
	I1209 02:39:11.725428  355030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:39:11.780177  355030 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-09 02:39:11.770283038 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:39:11.809897  355030 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-485234 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 02:39:11.862765  355030 out.go:179] * Pausing node embed-certs-485234 ... 
	I1209 02:39:11.872679  355030 host.go:66] Checking if "embed-certs-485234" exists ...
	I1209 02:39:11.874097  355030 ssh_runner.go:195] Run: systemctl --version
	I1209 02:39:11.874155  355030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-485234
	I1209 02:39:11.894583  355030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/embed-certs-485234/id_rsa Username:docker}
	I1209 02:39:11.990898  355030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:39:12.003383  355030 pause.go:52] kubelet running: true
	I1209 02:39:12.003453  355030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:39:12.163446  355030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:39:12.163554  355030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:39:12.239002  355030 cri.go:89] found id: "61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145"
	I1209 02:39:12.239027  355030 cri.go:89] found id: "ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d"
	I1209 02:39:12.239035  355030 cri.go:89] found id: "bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e"
	I1209 02:39:12.239040  355030 cri.go:89] found id: "c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	I1209 02:39:12.239044  355030 cri.go:89] found id: "61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8"
	I1209 02:39:12.239049  355030 cri.go:89] found id: "9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc"
	I1209 02:39:12.239054  355030 cri.go:89] found id: "a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b"
	I1209 02:39:12.239058  355030 cri.go:89] found id: "c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265"
	I1209 02:39:12.239065  355030 cri.go:89] found id: "6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e"
	I1209 02:39:12.239089  355030 cri.go:89] found id: "c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	I1209 02:39:12.239094  355030 cri.go:89] found id: "55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266"
	I1209 02:39:12.239099  355030 cri.go:89] found id: ""
	I1209 02:39:12.239142  355030 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:39:12.255267  355030 retry.go:31] will retry after 222.822166ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:39:12Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:39:12.478731  355030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:39:12.494750  355030 pause.go:52] kubelet running: false
	I1209 02:39:12.494813  355030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:39:12.697515  355030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:39:12.697611  355030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:39:12.786090  355030 cri.go:89] found id: "61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145"
	I1209 02:39:12.786122  355030 cri.go:89] found id: "ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d"
	I1209 02:39:12.786129  355030 cri.go:89] found id: "bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e"
	I1209 02:39:12.786134  355030 cri.go:89] found id: "c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	I1209 02:39:12.786138  355030 cri.go:89] found id: "61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8"
	I1209 02:39:12.786144  355030 cri.go:89] found id: "9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc"
	I1209 02:39:12.786149  355030 cri.go:89] found id: "a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b"
	I1209 02:39:12.786154  355030 cri.go:89] found id: "c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265"
	I1209 02:39:12.786158  355030 cri.go:89] found id: "6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e"
	I1209 02:39:12.786166  355030 cri.go:89] found id: "c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	I1209 02:39:12.786171  355030 cri.go:89] found id: "55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266"
	I1209 02:39:12.786175  355030 cri.go:89] found id: ""
	I1209 02:39:12.786221  355030 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:39:12.798882  355030 retry.go:31] will retry after 346.410918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:39:12Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:39:13.146148  355030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:39:13.158948  355030 pause.go:52] kubelet running: false
	I1209 02:39:13.159000  355030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:39:13.323353  355030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:39:13.323434  355030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:39:13.392064  355030 cri.go:89] found id: "61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145"
	I1209 02:39:13.392089  355030 cri.go:89] found id: "ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d"
	I1209 02:39:13.392096  355030 cri.go:89] found id: "bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e"
	I1209 02:39:13.392102  355030 cri.go:89] found id: "c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	I1209 02:39:13.392106  355030 cri.go:89] found id: "61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8"
	I1209 02:39:13.392111  355030 cri.go:89] found id: "9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc"
	I1209 02:39:13.392116  355030 cri.go:89] found id: "a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b"
	I1209 02:39:13.392121  355030 cri.go:89] found id: "c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265"
	I1209 02:39:13.392129  355030 cri.go:89] found id: "6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e"
	I1209 02:39:13.392150  355030 cri.go:89] found id: "c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	I1209 02:39:13.392159  355030 cri.go:89] found id: "55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266"
	I1209 02:39:13.392163  355030 cri.go:89] found id: ""
	I1209 02:39:13.392210  355030 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:39:13.404530  355030 retry.go:31] will retry after 387.715482ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:39:13Z" level=error msg="open /run/runc: no such file or directory"
	I1209 02:39:13.792838  355030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:39:13.808058  355030 pause.go:52] kubelet running: false
	I1209 02:39:13.808122  355030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 02:39:13.967761  355030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 02:39:13.967856  355030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 02:39:14.038760  355030 cri.go:89] found id: "61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145"
	I1209 02:39:14.038782  355030 cri.go:89] found id: "ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d"
	I1209 02:39:14.038787  355030 cri.go:89] found id: "bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e"
	I1209 02:39:14.038790  355030 cri.go:89] found id: "c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	I1209 02:39:14.038793  355030 cri.go:89] found id: "61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8"
	I1209 02:39:14.038800  355030 cri.go:89] found id: "9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc"
	I1209 02:39:14.038802  355030 cri.go:89] found id: "a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b"
	I1209 02:39:14.038805  355030 cri.go:89] found id: "c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265"
	I1209 02:39:14.038808  355030 cri.go:89] found id: "6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e"
	I1209 02:39:14.038818  355030 cri.go:89] found id: "c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	I1209 02:39:14.038822  355030 cri.go:89] found id: "55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266"
	I1209 02:39:14.038824  355030 cri.go:89] found id: ""
	I1209 02:39:14.038859  355030 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 02:39:14.052767  355030 out.go:203] 
	W1209 02:39:14.053835  355030 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:39:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:39:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 02:39:14.053849  355030 out.go:285] * 
	* 
	W1209 02:39:14.057916  355030 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 02:39:14.059697  355030 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-485234 --alsologtostderr -v=1 failed: exit status 80
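The root cause is visible in the stderr trace: each pause attempt runs `sudo runc list -f json`, runc's global --root defaults to /run/runc, and that directory is missing inside the node, so every retry fails and minikube exits with GUEST_PAUSE (exit status 80). A hedged diagnostic sketch, assuming crio's stock configuration keys, for checking where the runtime state actually lives on the node:

	# compare the state root `runc list` assumes with the one crio is configured to use
	out/minikube-linux-amd64 -p embed-certs-485234 ssh -- sudo crio config | grep -n runtime_root
	out/minikube-linux-amd64 -p embed-certs-485234 ssh -- sudo ls /run/runc    # the path runc expects by default
	out/minikube-linux-amd64 -p embed-certs-485234 ssh -- sudo crictl ps       # the CRI view works regardless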
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-485234
helpers_test.go:243: (dbg) docker inspect embed-certs-485234:

-- stdout --
	[
	    {
	        "Id": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	        "Created": "2025-12-09T02:37:10.901046477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:38:15.535415853Z",
	            "FinishedAt": "2025-12-09T02:38:13.072862021Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hosts",
	        "LogPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a-json.log",
	        "Name": "/embed-certs-485234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-485234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-485234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	                "LowerDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/merged",
	                "UpperDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/diff",
	                "WorkDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-485234",
	                "Source": "/var/lib/docker/volumes/embed-certs-485234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-485234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-485234",
	                "name.minikube.sigs.k8s.io": "embed-certs-485234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "564096b9f2091367c7b8488d5d46973e5fcbd32d9d85fbe583fe3fa465353b85",
	            "SandboxKey": "/var/run/docker/netns/564096b9f209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-485234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65c970efd44f13df8727d193873c6259ce2c56f73ef1221ef78d5983f99951ba",
	                    "EndpointID": "420b7aed494d57f523dd904ad3be55b3ee601dcef4eb120f99bb43b76fe7d4f6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1e:a2:1b:b1:38:f6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-485234",
	                        "2220a87a1394"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
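For reference, the "22/tcp" entry under NetworkSettings.Ports in this inspect output (HostPort 33118) is exactly what minikube's cli_runner extracted in the stderr trace above to open its SSH connection; the same Go template works standalone:

	# pull the host port mapped to the node's SSH port, as minikube does
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-485234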
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234: exit status 2 (320.532734ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
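As the harness notes, exit status 2 here may be expected: `minikube status` encodes component health in its exit code rather than failing outright, so a nonzero exit while Host still prints Running is consistent with a cluster that a failed pause left half-stopped. The same probe, reduced to the one field the harness checked:

	# same check the harness ran, formatted to the Host field only
	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-485234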
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25: (1.154762197s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-933067 sudo systemctl cat kubelet --no-pager                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status docker --all --full --no-pager                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl cat docker --no-pager                                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/docker/daemon.json                                                                                        │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo docker system info                                                                                                 │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl status cri-docker --all --full --no-pager                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ ssh     │ -p calico-933067 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cri-dockerd --version                                                                                              │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status containerd --all --full --no-pager                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl cat containerd --no-pager                                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cat /lib/systemd/system/containerd.service                                                                         │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/containerd/config.toml                                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo containerd config dump                                                                                             │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status crio --all --full --no-pager                                                                      │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl cat crio --no-pager                                                                                      │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo crio config                                                                                                        │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ delete  │ -p calico-933067                                                                                                                         │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ start   │ -p flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio │ flannel-933067     │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ image   │ embed-certs-485234 image list --format=json                                                                                              │ embed-certs-485234 │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ pause   │ -p embed-certs-485234 --alsologtostderr -v=1                                                                                             │ embed-certs-485234 │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:39:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:39:06.694118  353996 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:39:06.694389  353996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:06.694398  353996 out.go:374] Setting ErrFile to fd 2...
	I1209 02:39:06.694402  353996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:06.694590  353996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:39:06.695111  353996 out.go:368] Setting JSON to false
	I1209 02:39:06.696442  353996 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4896,"bootTime":1765243051,"procs":423,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:39:06.696508  353996 start.go:143] virtualization: kvm guest
	I1209 02:39:06.698379  353996 out.go:179] * [flannel-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:39:06.699606  353996 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:39:06.699609  353996 notify.go:221] Checking for updates...
	I1209 02:39:06.700920  353996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:39:06.702119  353996 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:39:06.703797  353996 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:39:06.705077  353996 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:39:06.706354  353996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:39:06.709255  353996 config.go:182] Loaded profile config "custom-flannel-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709386  353996 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709513  353996 config.go:182] Loaded profile config "enable-default-cni-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709648  353996 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:39:06.735215  353996 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:39:06.735329  353996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:39:06.795271  353996 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:39:06.784678394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:39:06.795379  353996 docker.go:319] overlay module found
	I1209 02:39:06.797138  353996 out.go:179] * Using the docker driver based on user configuration
	I1209 02:39:06.798318  353996 start.go:309] selected driver: docker
	I1209 02:39:06.798333  353996 start.go:927] validating driver "docker" against <nil>
	I1209 02:39:06.798343  353996 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:39:06.799107  353996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:39:06.867982  353996 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:39:06.85747639 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:39:06.868200  353996 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:39:06.868497  353996 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:39:06.870094  353996 out.go:179] * Using Docker driver with root privileges
	I1209 02:39:06.872123  353996 cni.go:84] Creating CNI manager for "flannel"
	I1209 02:39:06.872148  353996 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1209 02:39:06.872224  353996 start.go:353] cluster config:
	{Name:flannel-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:39:06.873549  353996 out.go:179] * Starting "flannel-933067" primary control-plane node in "flannel-933067" cluster
	I1209 02:39:06.874590  353996 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:39:06.875971  353996 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:39:06.877068  353996 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:39:06.877103  353996 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:39:06.877116  353996 cache.go:65] Caching tarball of preloaded images
	I1209 02:39:06.877171  353996 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:39:06.877212  353996 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:39:06.877230  353996 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:39:06.877358  353996 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/flannel-933067/config.json ...
	I1209 02:39:06.877385  353996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/flannel-933067/config.json: {Name:mk540d9d59c959672c7e95943fda0330a7701480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:39:06.903888  353996 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:39:06.903911  353996 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:39:06.903931  353996 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:39:06.903965  353996 start.go:360] acquireMachinesLock for flannel-933067: {Name:mk8839338c3c46860e97c16dfe24b20d0b3adaa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:39:06.904068  353996 start.go:364] duration metric: took 79.684µs to acquireMachinesLock for "flannel-933067"
	I1209 02:39:06.904105  353996 start.go:93] Provisioning new machine with config: &{Name:flannel-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:39:06.904200  353996 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:39:06.441036  346776 out.go:252]   - Booting up control plane ...
	I1209 02:39:06.441182  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:39:06.441324  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:39:06.442378  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:39:06.458615  346776 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:39:06.458868  346776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:39:06.467128  346776 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:39:06.467467  346776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:39:06.467518  346776 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:39:06.563003  346776 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:39:06.563188  346776 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:39:07.564652  346776 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001854884s
	I1209 02:39:07.567943  346776 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:39:07.568118  346776 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1209 02:39:07.568224  346776 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:39:07.568318  346776 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
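	Each endpoint kubeadm polls above can also be probed by hand from inside the node; a minimal sketch (-k is needed because the control-plane components serve cluster-internal certificates; the apiserver address is the one from this run):
	
	# Manual versions of kubeadm's control-plane health checks
	curl -sf  http://127.0.0.1:10248/healthz    # kubelet
	curl -skf https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -skf https://127.0.0.1:10259/livez     # kube-scheduler
	curl -skf https://192.168.76.2:8443/livez   # kube-apiserver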
	I1209 02:39:06.192312  341866 addons.go:530] duration metric: took 556.38228ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:39:06.484812  341866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-933067" context rescaled to 1 replicas
	W1209 02:39:07.990156  341866 node_ready.go:57] node "custom-flannel-933067" has "Ready":"False" status (will retry)
	I1209 02:39:06.905968  353996 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:39:06.906242  353996 start.go:159] libmachine.API.Create for "flannel-933067" (driver="docker")
	I1209 02:39:06.906282  353996 client.go:173] LocalClient.Create starting
	I1209 02:39:06.906359  353996 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:39:06.906401  353996 main.go:143] libmachine: Decoding PEM data...
	I1209 02:39:06.906429  353996 main.go:143] libmachine: Parsing certificate...
	I1209 02:39:06.906495  353996 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:39:06.906523  353996 main.go:143] libmachine: Decoding PEM data...
	I1209 02:39:06.906541  353996 main.go:143] libmachine: Parsing certificate...
	I1209 02:39:06.907005  353996 cli_runner.go:164] Run: docker network inspect flannel-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:39:06.926440  353996 cli_runner.go:211] docker network inspect flannel-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:39:06.926494  353996 network_create.go:284] running [docker network inspect flannel-933067] to gather additional debugging logs...
	I1209 02:39:06.926517  353996 cli_runner.go:164] Run: docker network inspect flannel-933067
	W1209 02:39:06.945180  353996 cli_runner.go:211] docker network inspect flannel-933067 returned with exit code 1
	I1209 02:39:06.945214  353996 network_create.go:287] error running [docker network inspect flannel-933067]: docker network inspect flannel-933067: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-933067 not found
	I1209 02:39:06.945233  353996 network_create.go:289] output of [docker network inspect flannel-933067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-933067 not found
	
	** /stderr **
	I1209 02:39:06.945361  353996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:39:06.965357  353996 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:39:06.965978  353996 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:39:06.966876  353996 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:39:06.968004  353996 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d05b99ab678b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:0c:2a:78:b3:03} reservation:<nil>}
	I1209 02:39:06.968876  353996 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-104636b6d5da IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:83:e2:02:a0:69} reservation:<nil>}
	I1209 02:39:06.969539  353996 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65c970efd44f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7e:8e:00:ff:ef:6f} reservation:<nil>}
	I1209 02:39:06.970697  353996 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec7a50}
	I1209 02:39:06.970724  353996 network_create.go:124] attempt to create docker network flannel-933067 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1209 02:39:06.970784  353996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-933067 flannel-933067
	I1209 02:39:07.028353  353996 network_create.go:108] docker network flannel-933067 192.168.103.0/24 created
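	The subnet scan above walks the private 192.168.x.0/24 ranges in steps of 9 (.49, .58, .67, ...), skips any that already back an existing bridge, and creates the profile network on the first free one. The create call can be replayed verbatim; a sketch using this run's values:
	
	# Recreate the profile network exactly as minikube did (subnet and name from this run)
	docker network create --driver=bridge \
	  --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=flannel-933067 \
	  flannel-933067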
	I1209 02:39:07.028383  353996 kic.go:121] calculated static IP "192.168.103.2" for the "flannel-933067" container
	I1209 02:39:07.028445  353996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:39:07.048303  353996 cli_runner.go:164] Run: docker volume create flannel-933067 --label name.minikube.sigs.k8s.io=flannel-933067 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:39:07.069306  353996 oci.go:103] Successfully created a docker volume flannel-933067
	I1209 02:39:07.069393  353996 cli_runner.go:164] Run: docker run --rm --name flannel-933067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-933067 --entrypoint /usr/bin/test -v flannel-933067:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:39:07.544379  353996 oci.go:107] Successfully prepared a docker volume flannel-933067
	I1209 02:39:07.544465  353996 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:39:07.544480  353996 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:39:07.544582  353996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v flannel-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
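	The extraction step above is a throwaway container whose only job is to unpack the lz4 preload tarball into the profile's named volume, so the node container later starts with its images already in place. The pattern in isolation, with the long paths replaced by placeholder variables for readability:
	
	# Unpack an lz4 tarball into a named docker volume via a disposable container
	# ($PRELOAD_TARBALL and $KICBASE_IMAGE stand in for this run's long paths)
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	  -v flannel-933067:/extractDir \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir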
	I1209 02:39:10.052595  346776 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.484574752s
	I1209 02:39:10.721619  346776 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.153674559s
	I1209 02:39:13.070261  346776 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502303136s
	I1209 02:39:13.086962  346776 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:39:13.096846  346776 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:39:13.107967  346776 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:39:13.108245  346776 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-933067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:39:13.122683  346776 kubeadm.go:319] [bootstrap-token] Using token: lm3e74.zdygmvr5eabou70e
	I1209 02:39:13.123832  346776 out.go:252]   - Configuring RBAC rules ...
	I1209 02:39:13.124000  346776 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:39:13.127503  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:39:13.133375  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:39:13.135837  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:39:13.138185  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:39:13.140783  346776 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:39:13.476580  346776 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:39:13.892189  346776 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:39:14.476408  346776 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:39:14.477328  346776 kubeadm.go:319] 
	I1209 02:39:14.477388  346776 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:39:14.477401  346776 kubeadm.go:319] 
	I1209 02:39:14.477499  346776 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:39:14.477522  346776 kubeadm.go:319] 
	I1209 02:39:14.477564  346776 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:39:14.477626  346776 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:39:14.477719  346776 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:39:14.477735  346776 kubeadm.go:319] 
	I1209 02:39:14.477792  346776 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:39:14.477810  346776 kubeadm.go:319] 
	I1209 02:39:14.477893  346776 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:39:14.477905  346776 kubeadm.go:319] 
	I1209 02:39:14.478004  346776 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:39:14.478109  346776 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:39:14.478207  346776 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:39:14.478217  346776 kubeadm.go:319] 
	I1209 02:39:14.478360  346776 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:39:14.478490  346776 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:39:14.478499  346776 kubeadm.go:319] 
	I1209 02:39:14.478656  346776 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lm3e74.zdygmvr5eabou70e \
	I1209 02:39:14.478790  346776 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:39:14.478840  346776 kubeadm.go:319] 	--control-plane 
	I1209 02:39:14.478850  346776 kubeadm.go:319] 
	I1209 02:39:14.478955  346776 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:39:14.478963  346776 kubeadm.go:319] 
	I1209 02:39:14.479064  346776 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lm3e74.zdygmvr5eabou70e \
	I1209 02:39:14.479205  346776 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
	I1209 02:39:14.481950  346776 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 02:39:14.482111  346776 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 02:39:14.482145  346776 cni.go:84] Creating CNI manager for "bridge"
	I1209 02:39:14.483475  346776 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
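	Because this profile (enable-default-cni-933067) uses the built-in bridge CNI, minikube writes a conflist under /etc/cni/net.d/ on the node rather than deploying a CNI daemonset. Illustrative only, a minimal bridge conflist of that general shape (the path and every field value below are assumptions for the sketch, not minikube's exact template):
	
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF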
	
	
	==> CRI-O <==
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.277936107Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.281853011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.281874954Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.393007548Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1076913f-082b-4559-ac2e-182004787f38 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.396347249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e7bdc1de-a1ce-4ba6-8999-331e054c49da name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.399571392Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=79b46204-67cc-45c3-a874-a146d84133e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.399756443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.408167507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.408941055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.440921974Z" level=info msg="Created container c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=79b46204-67cc-45c3-a874-a146d84133e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.441582519Z" level=info msg="Starting container: c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0" id=0f0802ad-41ce-4c68-9809-752aa358f681 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.443854847Z" level=info msg="Started container" PID=1762 containerID=c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper id=0f0802ad-41ce-4c68-9809-752aa358f681 name=/runtime.v1.RuntimeService/StartContainer sandboxID=453fb6d7a6d5cb5c7627c51560b109ce7231bc92ab5c250c2f858c0e0b8cf475
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.509739462Z" level=info msg="Removing container: 71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577" id=9339be47-2c3d-4f99-b0f9-2badee6da668 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.524599835Z" level=info msg="Removed container 71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=9339be47-2c3d-4f99-b0f9-2badee6da668 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.531215595Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0ad61c9-b644-4c22-8132-db5246475783 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.532208601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f68ffd0d-0018-4a63-b413-6089d45f0a4e name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.533267616Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=24a0ab64-b6d3-4831-aa89-41a48062e53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.533391654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539058388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539195095Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3541f2da52c9fd9a470a0563945b8c818f3c81e68e5fbefa8d672102cb14d432/merged/etc/passwd: no such file or directory"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539217126Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3541f2da52c9fd9a470a0563945b8c818f3c81e68e5fbefa8d672102cb14d432/merged/etc/group: no such file or directory"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539406752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.569956504Z" level=info msg="Created container 61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145: kube-system/storage-provisioner/storage-provisioner" id=24a0ab64-b6d3-4831-aa89-41a48062e53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.570528566Z" level=info msg="Starting container: 61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145" id=5c498fea-800d-4d01-97d5-c4971971176c name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.572676545Z" level=info msg="Started container" PID=1777 containerID=61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145 description=kube-system/storage-provisioner/storage-provisioner id=5c498fea-800d-4d01-97d5-c4971971176c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba8287127a1f7647a1e3b8189fbcca801f291afd13aca8800d12bbb5e88ea036
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	61185cadc62b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   ba8287127a1f7       storage-provisioner                          kube-system
	c0c0884e326a4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   453fb6d7a6d5c       dashboard-metrics-scraper-6ffb444bf9-dttsr   kubernetes-dashboard
	55fee4ca21f23       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   4fc9909d2de11       kubernetes-dashboard-855c9754f9-qgrpj        kubernetes-dashboard
	ac0b7af3de031       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   78541c5744ed6       coredns-66bc5c9577-sk4dm                     kube-system
	1c9fe02fb40b8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   58a00080213f7       busybox                                      default
	bdcdd90996327       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   ab60549e2bb23       kube-proxy-ldzjl                             kube-system
	c623235e88714       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   ba8287127a1f7       storage-provisioner                          kube-system
	61b6510c4ee0e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   1c64f0b37b1c2       kindnet-m72mz                                kube-system
	9a18851e4fed4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   a9c2fedcc566c       etcd-embed-certs-485234                      kube-system
	a25f764bedd8c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   78c08012de8ad       kube-scheduler-embed-certs-485234            kube-system
	c005019871649       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   eaf71cf8baf4b       kube-apiserver-embed-certs-485234            kube-system
	6bedda73910b6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   887bfcb44e8c2       kube-controller-manager-embed-certs-485234   kube-system
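	The table above is CRI-level container state from inside the embed-certs-485234 node: the storage-provisioner restart is visible (attempt 1 Running, attempt 0 Exited in the same pod sandbox), as is the Exited dashboard-metrics-scraper. While the profile is still up, the same view can be requeried by hand; a sketch:
	
	# List all CRI containers on the node, including exited ones
	minikube ssh -p embed-certs-485234 sudo crictl ps -a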
	
	
	==> coredns [ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47102 - 64650 "HINFO IN 3800287044420242770.7883161424784710309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10402295s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
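	The three i/o timeouts above are CoreDNS failing to reach the apiserver's service VIP (10.96.0.1:443), which usually means kube-proxy had not yet programmed the VIP when CoreDNS came up; they typically clear once the data path settles, consistent with the Ready node state later in this dump. VIP reachability can be spot-checked from a throwaway pod; a sketch assuming a TLS-capable busybox build and the default anonymous access to /livez:
	
	# Probe the kubernetes service VIP from inside the cluster (pod name and image are illustrative)
	kubectl run viptest --rm -it --restart=Never --image=busybox:1.36 -- \
	  wget -qO- --no-check-certificate https://10.96.0.1:443/livez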
	
	
	==> describe nodes <==
	Name:               embed-certs-485234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-485234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=embed-certs-485234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_37_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:37:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-485234
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:39:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-485234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                e57d68b0-a212-4022-b9d5-5572cf2bedcf
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-sk4dm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-485234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-m72mz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-485234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-485234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-ldzjl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-485234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dttsr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qgrpj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-485234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-485234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-485234 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node embed-certs-485234 event: Registered Node embed-certs-485234 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-485234 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node embed-certs-485234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node embed-certs-485234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node embed-certs-485234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-485234 event: Registered Node embed-certs-485234 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	
	
	==> etcd [9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc] <==
	{"level":"warn","ts":"2025-12-09T02:38:23.703460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.710133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.716984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.724797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.731314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.738680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.746258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.753862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.768790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.776118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.788911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.795246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.802786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.810664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.818736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.826184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.833690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.839946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.846458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.852980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.872122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.880412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.888575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.943504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:38:38.890317Z","caller":"traceutil/trace.go:172","msg":"trace[150456985] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"133.880872ms","start":"2025-12-09T02:38:38.756416Z","end":"2025-12-09T02:38:38.890297Z","steps":["trace[150456985] 'process raft request'  (duration: 104.67944ms)","trace[150456985] 'compare'  (duration: 29.109567ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:39:15 up  1:21,  0 user,  load average: 5.29, 3.50, 2.30
	Linux embed-certs-485234 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8] <==
	I1209 02:38:25.976625       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:38:25.976939       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:38:25.977148       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:38:25.977175       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:38:25.977196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:38:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:38:26.258444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:38:26.258467       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:38:26.258479       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:38:26.258757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:38:26.559709       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:38:26.559751       1 metrics.go:72] Registering metrics
	I1209 02:38:26.559839       1 controller.go:711] "Syncing nftables rules"
	I1209 02:38:36.259128       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:36.259203       1 main.go:301] handling current node
	I1209 02:38:46.260808       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:46.260837       1 main.go:301] handling current node
	I1209 02:38:56.259333       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:56.259379       1 main.go:301] handling current node
	I1209 02:39:06.264704       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:39:06.264750       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265] <==
	I1209 02:38:24.552398       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 02:38:24.552872       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1209 02:38:24.552905       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:38:24.556655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:38:24.556676       1 aggregator.go:171] initial CRD sync complete...
	I1209 02:38:24.556683       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 02:38:24.556688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:38:24.556694       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:38:24.554113       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:38:24.557240       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:38:24.554254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 02:38:24.564783       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:38:24.594227       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:38:24.612400       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:38:24.960917       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:38:24.986893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:38:25.003737       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:38:25.010289       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:38:25.015862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:38:25.046344       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.246.121"}
	I1209 02:38:25.058917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.102.59"}
	I1209 02:38:25.446442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:38:28.109258       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:38:28.358360       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:38:28.507718       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e] <==
	I1209 02:38:27.879768       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 02:38:27.879776       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 02:38:27.882118       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 02:38:27.885432       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 02:38:27.887680       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:38:27.890119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:38:27.895501       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:38:27.898859       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 02:38:27.901149       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1209 02:38:27.904428       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:38:27.904459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 02:38:27.904560       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:38:27.905609       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:38:27.905661       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1209 02:38:27.905680       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:38:27.905711       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:38:27.905864       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:38:27.907352       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 02:38:27.908442       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 02:38:27.910693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:38:27.910839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:38:27.913002       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:38:27.919285       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 02:38:27.921522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:38:27.935852       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e] <==
	I1209 02:38:25.812657       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:38:25.884587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:38:25.985233       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:38:25.985273       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:38:25.985376       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:38:26.010821       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:38:26.010883       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:38:26.017127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:38:26.017571       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:38:26.017972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:38:26.021432       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:38:26.021461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:38:26.022136       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:38:26.024084       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:38:26.022733       1 config.go:200] "Starting service config controller"
	I1209 02:38:26.024123       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:38:26.024070       1 config.go:309] "Starting node config controller"
	I1209 02:38:26.024148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:38:26.024154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:38:26.121672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:38:26.124349       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:38:26.124360       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b] <==
	I1209 02:38:23.504220       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:38:24.495271       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:38:24.495781       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:38:24.496168       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:38:24.496332       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:38:24.524392       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:38:24.524484       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:38:24.539762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:38:24.539851       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:38:24.542097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:38:24.542188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:38:24.640015       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540121     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsgq6\" (UniqueName: \"kubernetes.io/projected/203ce5c0-481b-4ec6-afe4-db17c646a2ae-kube-api-access-zsgq6\") pod \"kubernetes-dashboard-855c9754f9-qgrpj\" (UID: \"203ce5c0-481b-4ec6-afe4-db17c646a2ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qgrpj"
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540239     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5648\" (UniqueName: \"kubernetes.io/projected/8782590a-10e3-436f-8acc-0e0f4c95c53b-kube-api-access-h5648\") pod \"dashboard-metrics-scraper-6ffb444bf9-dttsr\" (UID: \"8782590a-10e3-436f-8acc-0e0f4c95c53b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr"
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540299     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8782590a-10e3-436f-8acc-0e0f4c95c53b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dttsr\" (UID: \"8782590a-10e3-436f-8acc-0e0f4c95c53b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr"
	Dec 09 02:38:31 embed-certs-485234 kubelet[733]: I1209 02:38:31.453087     733 scope.go:117] "RemoveContainer" containerID="a0da432a0df376772dd4239d098b418cef1af83e5f9a275153cbd3403ed839e0"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: I1209 02:38:32.457578     733 scope.go:117] "RemoveContainer" containerID="a0da432a0df376772dd4239d098b418cef1af83e5f9a275153cbd3403ed839e0"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: I1209 02:38:32.457966     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: E1209 02:38:32.458138     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:33 embed-certs-485234 kubelet[733]: I1209 02:38:33.461970     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:33 embed-certs-485234 kubelet[733]: E1209 02:38:33.462148     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:34 embed-certs-485234 kubelet[733]: I1209 02:38:34.465122     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:34 embed-certs-485234 kubelet[733]: E1209 02:38:34.465364     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:35 embed-certs-485234 kubelet[733]: I1209 02:38:35.485204     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qgrpj" podStartSLOduration=1.362615845 podStartE2EDuration="7.485178338s" podCreationTimestamp="2025-12-09 02:38:28 +0000 UTC" firstStartedPulling="2025-12-09 02:38:28.822697325 +0000 UTC m=+6.517920206" lastFinishedPulling="2025-12-09 02:38:34.945259821 +0000 UTC m=+12.640482699" observedRunningTime="2025-12-09 02:38:35.484518742 +0000 UTC m=+13.179741651" watchObservedRunningTime="2025-12-09 02:38:35.485178338 +0000 UTC m=+13.180401238"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.392408     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.508292     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.508555     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: E1209 02:38:49.508764     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:52 embed-certs-485234 kubelet[733]: I1209 02:38:52.939689     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:38:52 embed-certs-485234 kubelet[733]: E1209 02:38:52.939933     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:56 embed-certs-485234 kubelet[733]: I1209 02:38:56.530838     733 scope.go:117] "RemoveContainer" containerID="c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	Dec 09 02:39:08 embed-certs-485234 kubelet[733]: I1209 02:39:08.393294     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:39:08 embed-certs-485234 kubelet[733]: E1209 02:39:08.394045     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: kubelet.service: Consumed 1.598s CPU time.
	
	
	==> kubernetes-dashboard [55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266] <==
	2025/12/09 02:38:35 Starting overwatch
	2025/12/09 02:38:35 Using namespace: kubernetes-dashboard
	2025/12/09 02:38:35 Using in-cluster config to connect to apiserver
	2025/12/09 02:38:35 Using secret token for csrf signing
	2025/12/09 02:38:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:38:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:38:35 Successful initial request to the apiserver, version: v1.34.2
	2025/12/09 02:38:35 Generating JWE encryption key
	2025/12/09 02:38:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:38:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:38:35 Initializing JWE encryption key from synchronized object
	2025/12/09 02:38:35 Creating in-cluster Sidecar client
	2025/12/09 02:38:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:38:35 Serving insecurely on HTTP port: 9090
	2025/12/09 02:39:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145] <==
	I1209 02:38:56.586352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:38:56.593781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:38:56.593877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:38:56.596776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:00.052352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:04.312687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:07.911386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:11.048772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.071302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.076923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:39:14.077086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:39:14.077224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5aeea5d1-f9d5-472a-8ee4-5bcc362f6ec9", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77 became leader
	I1209 02:39:14.077268       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77!
	W1209 02:39:14.079400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.084601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:39:14.177840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77!
	
	
	==> storage-provisioner [c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296] <==
	I1209 02:38:25.772239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:38:55.774804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-485234 -n embed-certs-485234: exit status 2 (348.285319ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-485234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-485234
helpers_test.go:243: (dbg) docker inspect embed-certs-485234:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	        "Created": "2025-12-09T02:37:10.901046477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T02:38:15.535415853Z",
	            "FinishedAt": "2025-12-09T02:38:13.072862021Z"
	        },
	        "Image": "sha256:95ab0aa37c4ecbd07c950f85659128f53c511d233664b1bc11ed61c7de785d96",
	        "ResolvConfPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/hosts",
	        "LogPath": "/var/lib/docker/containers/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a/2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a-json.log",
	        "Name": "/embed-certs-485234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-485234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-485234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2220a87a139408ac5df2a820fa1783bee0e71bf1e37d9157a2a7efd764306d4a",
	                "LowerDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004-init/diff:/var/lib/docker/overlay2/0fc82a6f5b0ec8890572ba4cea85d1120ba3059ffd7c28b80c19dd8ca688ec4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/merged",
	                "UpperDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/diff",
	                "WorkDir": "/var/lib/docker/overlay2/754c009276f320a9bb890b0e6665ee7bbe26530212ce8d29819c69cbd4c5d004/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-485234",
	                "Source": "/var/lib/docker/volumes/embed-certs-485234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-485234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-485234",
	                "name.minikube.sigs.k8s.io": "embed-certs-485234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "564096b9f2091367c7b8488d5d46973e5fcbd32d9d85fbe583fe3fa465353b85",
	            "SandboxKey": "/var/run/docker/netns/564096b9f209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-485234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65c970efd44f13df8727d193873c6259ce2c56f73ef1221ef78d5983f99951ba",
	                    "EndpointID": "420b7aed494d57f523dd904ad3be55b3ee601dcef4eb120f99bb43b76fe7d4f6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1e:a2:1b:b1:38:f6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-485234",
	                        "2220a87a1394"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234: exit status 2 (343.85728ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-485234 logs -n 25: (1.078324883s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-933067 sudo systemctl cat kubelet --no-pager                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status docker --all --full --no-pager                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl cat docker --no-pager                                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │ 09 Dec 25 02:38 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/docker/daemon.json                                                                                        │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo docker system info                                                                                                 │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl status cri-docker --all --full --no-pager                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:38 UTC │                     │
	│ ssh     │ -p calico-933067 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ ssh     │ -p calico-933067 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cri-dockerd --version                                                                                              │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status containerd --all --full --no-pager                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ ssh     │ -p calico-933067 sudo systemctl cat containerd --no-pager                                                                                │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cat /lib/systemd/system/containerd.service                                                                         │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo cat /etc/containerd/config.toml                                                                                    │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo containerd config dump                                                                                             │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl status crio --all --full --no-pager                                                                      │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo systemctl cat crio --no-pager                                                                                      │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ ssh     │ -p calico-933067 sudo crio config                                                                                                        │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ delete  │ -p calico-933067                                                                                                                         │ calico-933067      │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ start   │ -p flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio │ flannel-933067     │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	│ image   │ embed-certs-485234 image list --format=json                                                                                              │ embed-certs-485234 │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │ 09 Dec 25 02:39 UTC │
	│ pause   │ -p embed-certs-485234 --alsologtostderr -v=1                                                                                             │ embed-certs-485234 │ jenkins │ v1.37.0 │ 09 Dec 25 02:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:39:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:39:06.694118  353996 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:39:06.694389  353996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:06.694398  353996 out.go:374] Setting ErrFile to fd 2...
	I1209 02:39:06.694402  353996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:39:06.694590  353996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:39:06.695111  353996 out.go:368] Setting JSON to false
	I1209 02:39:06.696442  353996 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4896,"bootTime":1765243051,"procs":423,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:39:06.696508  353996 start.go:143] virtualization: kvm guest
	I1209 02:39:06.698379  353996 out.go:179] * [flannel-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:39:06.699606  353996 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:39:06.699609  353996 notify.go:221] Checking for updates...
	I1209 02:39:06.700920  353996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:39:06.702119  353996 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:39:06.703797  353996 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:39:06.705077  353996 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:39:06.706354  353996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:39:06.709255  353996 config.go:182] Loaded profile config "custom-flannel-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709386  353996 config.go:182] Loaded profile config "embed-certs-485234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709513  353996 config.go:182] Loaded profile config "enable-default-cni-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:06.709648  353996 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:39:06.735215  353996 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:39:06.735329  353996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:39:06.795271  353996 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:39:06.784678394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:39:06.795379  353996 docker.go:319] overlay module found
	I1209 02:39:06.797138  353996 out.go:179] * Using the docker driver based on user configuration
	I1209 02:39:06.798318  353996 start.go:309] selected driver: docker
	I1209 02:39:06.798333  353996 start.go:927] validating driver "docker" against <nil>
	I1209 02:39:06.798343  353996 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:39:06.799107  353996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:39:06.867982  353996 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-09 02:39:06.85747639 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:39:06.868200  353996 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 02:39:06.868497  353996 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 02:39:06.870094  353996 out.go:179] * Using Docker driver with root privileges
	I1209 02:39:06.872123  353996 cni.go:84] Creating CNI manager for "flannel"
	I1209 02:39:06.872148  353996 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1209 02:39:06.872224  353996 start.go:353] cluster config:
	{Name:flannel-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:39:06.873549  353996 out.go:179] * Starting "flannel-933067" primary control-plane node in "flannel-933067" cluster
	I1209 02:39:06.874590  353996 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 02:39:06.875971  353996 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 02:39:06.877068  353996 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:39:06.877103  353996 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 02:39:06.877116  353996 cache.go:65] Caching tarball of preloaded images
	I1209 02:39:06.877171  353996 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 02:39:06.877212  353996 preload.go:238] Found /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 02:39:06.877230  353996 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 02:39:06.877358  353996 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/flannel-933067/config.json ...
	I1209 02:39:06.877385  353996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/flannel-933067/config.json: {Name:mk540d9d59c959672c7e95943fda0330a7701480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 02:39:06.903888  353996 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 02:39:06.903911  353996 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 02:39:06.903931  353996 cache.go:243] Successfully downloaded all kic artifacts
	I1209 02:39:06.903965  353996 start.go:360] acquireMachinesLock for flannel-933067: {Name:mk8839338c3c46860e97c16dfe24b20d0b3adaa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 02:39:06.904068  353996 start.go:364] duration metric: took 79.684µs to acquireMachinesLock for "flannel-933067"
	I1209 02:39:06.904105  353996 start.go:93] Provisioning new machine with config: &{Name:flannel-933067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-933067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 02:39:06.904200  353996 start.go:125] createHost starting for "" (driver="docker")
	I1209 02:39:06.441036  346776 out.go:252]   - Booting up control plane ...
	I1209 02:39:06.441182  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 02:39:06.441324  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 02:39:06.442378  346776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 02:39:06.458615  346776 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 02:39:06.458868  346776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 02:39:06.467128  346776 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 02:39:06.467467  346776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 02:39:06.467518  346776 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 02:39:06.563003  346776 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 02:39:06.563188  346776 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 02:39:07.564652  346776 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001854884s
	I1209 02:39:07.567943  346776 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 02:39:07.568118  346776 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1209 02:39:07.568224  346776 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 02:39:07.568318  346776 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 02:39:06.192312  341866 addons.go:530] duration metric: took 556.38228ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 02:39:06.484812  341866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-933067" context rescaled to 1 replicas
	W1209 02:39:07.990156  341866 node_ready.go:57] node "custom-flannel-933067" has "Ready":"False" status (will retry)
	I1209 02:39:06.905968  353996 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 02:39:06.906242  353996 start.go:159] libmachine.API.Create for "flannel-933067" (driver="docker")
	I1209 02:39:06.906282  353996 client.go:173] LocalClient.Create starting
	I1209 02:39:06.906359  353996 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem
	I1209 02:39:06.906401  353996 main.go:143] libmachine: Decoding PEM data...
	I1209 02:39:06.906429  353996 main.go:143] libmachine: Parsing certificate...
	I1209 02:39:06.906495  353996 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem
	I1209 02:39:06.906523  353996 main.go:143] libmachine: Decoding PEM data...
	I1209 02:39:06.906541  353996 main.go:143] libmachine: Parsing certificate...
	I1209 02:39:06.907005  353996 cli_runner.go:164] Run: docker network inspect flannel-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 02:39:06.926440  353996 cli_runner.go:211] docker network inspect flannel-933067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 02:39:06.926494  353996 network_create.go:284] running [docker network inspect flannel-933067] to gather additional debugging logs...
	I1209 02:39:06.926517  353996 cli_runner.go:164] Run: docker network inspect flannel-933067
	W1209 02:39:06.945180  353996 cli_runner.go:211] docker network inspect flannel-933067 returned with exit code 1
	I1209 02:39:06.945214  353996 network_create.go:287] error running [docker network inspect flannel-933067]: docker network inspect flannel-933067: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-933067 not found
	I1209 02:39:06.945233  353996 network_create.go:289] output of [docker network inspect flannel-933067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-933067 not found
	
	** /stderr **
	I1209 02:39:06.945361  353996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 02:39:06.965357  353996 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
	I1209 02:39:06.965978  353996 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb5d2d0ced9f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:9a:05:06:39:c4} reservation:<nil>}
	I1209 02:39:06.966876  353996 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bb004f121aef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ee:28:8a:93:4c} reservation:<nil>}
	I1209 02:39:06.968004  353996 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d05b99ab678b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:0c:2a:78:b3:03} reservation:<nil>}
	I1209 02:39:06.968876  353996 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-104636b6d5da IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:83:e2:02:a0:69} reservation:<nil>}
	I1209 02:39:06.969539  353996 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-65c970efd44f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7e:8e:00:ff:ef:6f} reservation:<nil>}
	I1209 02:39:06.970697  353996 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec7a50}
	I1209 02:39:06.970724  353996 network_create.go:124] attempt to create docker network flannel-933067 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1209 02:39:06.970784  353996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-933067 flannel-933067
	I1209 02:39:07.028353  353996 network_create.go:108] docker network flannel-933067 192.168.103.0/24 created
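The subnet scan above walks the private 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, 85, 94) and takes the first CIDR with no existing bridge interface, landing on 192.168.103.0/24 here. The created network can be checked with a plain docker command (a sketch using the names from this log; the --format expression is standard docker Go-template syntax):

	docker network inspect flannel-933067 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.103.0/24 gw 192.168.103.1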
	I1209 02:39:07.028383  353996 kic.go:121] calculated static IP "192.168.103.2" for the "flannel-933067" container
	I1209 02:39:07.028445  353996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 02:39:07.048303  353996 cli_runner.go:164] Run: docker volume create flannel-933067 --label name.minikube.sigs.k8s.io=flannel-933067 --label created_by.minikube.sigs.k8s.io=true
	I1209 02:39:07.069306  353996 oci.go:103] Successfully created a docker volume flannel-933067
	I1209 02:39:07.069393  353996 cli_runner.go:164] Run: docker run --rm --name flannel-933067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-933067 --entrypoint /usr/bin/test -v flannel-933067:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 02:39:07.544379  353996 oci.go:107] Successfully prepared a docker volume flannel-933067
	I1209 02:39:07.544465  353996 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 02:39:07.544480  353996 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 02:39:07.544582  353996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v flannel-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
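The extraction step above is just tar with an lz4 decompression filter, run inside the kicbase image so the preloaded container images land in the flannel-933067 volume. An equivalent host-side command, assuming lz4 and tar are installed locally and using the tarball path from this log, would be:

	lz4 -dc preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 \
	  | tar -x -C /extractDir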
	I1209 02:39:10.052595  346776 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.484574752s
	I1209 02:39:10.721619  346776 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.153674559s
	I1209 02:39:13.070261  346776 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502303136s
	I1209 02:39:13.086962  346776 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 02:39:13.096846  346776 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 02:39:13.107967  346776 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 02:39:13.108245  346776 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-933067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 02:39:13.122683  346776 kubeadm.go:319] [bootstrap-token] Using token: lm3e74.zdygmvr5eabou70e
	I1209 02:39:13.123832  346776 out.go:252]   - Configuring RBAC rules ...
	I1209 02:39:13.124000  346776 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 02:39:13.127503  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 02:39:13.133375  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 02:39:13.135837  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 02:39:13.138185  346776 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 02:39:13.140783  346776 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 02:39:13.476580  346776 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 02:39:13.892189  346776 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 02:39:14.476408  346776 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 02:39:14.477328  346776 kubeadm.go:319] 
	I1209 02:39:14.477388  346776 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 02:39:14.477401  346776 kubeadm.go:319] 
	I1209 02:39:14.477499  346776 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 02:39:14.477522  346776 kubeadm.go:319] 
	I1209 02:39:14.477564  346776 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 02:39:14.477626  346776 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 02:39:14.477719  346776 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 02:39:14.477735  346776 kubeadm.go:319] 
	I1209 02:39:14.477792  346776 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 02:39:14.477810  346776 kubeadm.go:319] 
	I1209 02:39:14.477893  346776 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 02:39:14.477905  346776 kubeadm.go:319] 
	I1209 02:39:14.478004  346776 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 02:39:14.478109  346776 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 02:39:14.478207  346776 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 02:39:14.478217  346776 kubeadm.go:319] 
	I1209 02:39:14.478360  346776 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 02:39:14.478490  346776 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 02:39:14.478499  346776 kubeadm.go:319] 
	I1209 02:39:14.478656  346776 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lm3e74.zdygmvr5eabou70e \
	I1209 02:39:14.478790  346776 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf \
	I1209 02:39:14.478840  346776 kubeadm.go:319] 	--control-plane 
	I1209 02:39:14.478850  346776 kubeadm.go:319] 
	I1209 02:39:14.478955  346776 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 02:39:14.478963  346776 kubeadm.go:319] 
	I1209 02:39:14.479064  346776 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lm3e74.zdygmvr5eabou70e \
	I1209 02:39:14.479205  346776 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d3fba6b5f901ac5b7c340e09389541b38acfe40319cf3366cc5289491dfc7cdf 
	I1209 02:39:14.481950  346776 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1209 02:39:14.482111  346776 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
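The kubeadm init transcript above ends with the standard join commands and two benign warnings (missing "configs" kernel module, kubelet service not enabled). At this point the API server can be probed directly from the host running the docker driver, using the endpoint from the control-plane-check lines earlier; this is a sketch and assumes the default RBAC that exposes /livez to unauthenticated clients:

	curl -k https://192.168.76.2:8443/livez
	# expected: ok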
	I1209 02:39:14.482145  346776 cni.go:84] Creating CNI manager for "bridge"
	I1209 02:39:14.483475  346776 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
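"Configuring bridge CNI" writes a conflist under /etc/cni/net.d inside the node. Which file CRI-O will actually load can be listed over minikube ssh (a sketch; the profile name is taken from the mark-control-plane line above):

	minikube -p enable-default-cni-933067 ssh -- sudo ls -l /etc/cni/net.d/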
	I1209 02:39:09.987678  341866 node_ready.go:49] node "custom-flannel-933067" is "Ready"
	I1209 02:39:09.987773  341866 node_ready.go:38] duration metric: took 4.006977831s for node "custom-flannel-933067" to be "Ready" ...
	I1209 02:39:09.988076  341866 api_server.go:52] waiting for apiserver process to appear ...
	I1209 02:39:09.988139  341866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:39:10.014283  341866 api_server.go:72] duration metric: took 4.378495846s to wait for apiserver process to appear ...
	I1209 02:39:10.014318  341866 api_server.go:88] waiting for apiserver healthz status ...
	I1209 02:39:10.014340  341866 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 02:39:10.022768  341866 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1209 02:39:10.024372  341866 api_server.go:141] control plane version: v1.34.2
	I1209 02:39:10.024405  341866 api_server.go:131] duration metric: took 10.078941ms to wait for apiserver health ...
	I1209 02:39:10.024417  341866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 02:39:10.030294  341866 system_pods.go:59] 7 kube-system pods found
	I1209 02:39:10.030344  341866 system_pods.go:61] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:10.030354  341866 system_pods.go:61] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:10.030364  341866 system_pods.go:61] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:39:10.030372  341866 system_pods.go:61] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:10.030378  341866 system_pods.go:61] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:10.030495  341866 system_pods.go:61] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:39:10.030503  341866 system_pods.go:61] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:10.030511  341866 system_pods.go:74] duration metric: took 6.087264ms to wait for pod list to return data ...
	I1209 02:39:10.030522  341866 default_sa.go:34] waiting for default service account to be created ...
	I1209 02:39:10.034518  341866 default_sa.go:45] found service account: "default"
	I1209 02:39:10.034538  341866 default_sa.go:55] duration metric: took 4.010558ms for default service account to be created ...
	I1209 02:39:10.034547  341866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 02:39:10.037743  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:10.037770  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:10.037779  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:10.037808  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:39:10.037817  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:10.037834  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:10.037842  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:39:10.037849  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:10.037871  341866 retry.go:31] will retry after 210.368355ms: missing components: kube-dns
	I1209 02:39:10.252269  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:10.252300  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:10.252307  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:10.252316  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:39:10.252324  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:10.252328  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:10.252333  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:39:10.252338  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:10.252352  341866 retry.go:31] will retry after 382.785454ms: missing components: kube-dns
	I1209 02:39:10.910963  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:10.911002  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:10.911013  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:10.911025  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:39:10.911035  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:10.911041  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:10.911050  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:39:10.911057  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:10.911088  341866 retry.go:31] will retry after 431.453854ms: missing components: kube-dns
	I1209 02:39:11.508386  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:11.508427  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:11.508437  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:11.508450  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 02:39:11.508459  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:11.508466  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:11.508477  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 02:39:11.508485  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:11.508504  341866 retry.go:31] will retry after 586.934014ms: missing components: kube-dns
	I1209 02:39:12.099364  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:12.099406  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:12.099415  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:12.099421  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running
	I1209 02:39:12.099428  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:12.099432  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:12.099436  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running
	I1209 02:39:12.099443  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 02:39:12.099456  341866 retry.go:31] will retry after 556.333202ms: missing components: kube-dns
	I1209 02:39:12.664032  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:12.664069  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:12.664081  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:12.664089  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running
	I1209 02:39:12.664100  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:12.664106  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:12.664112  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running
	I1209 02:39:12.664129  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Running
	I1209 02:39:12.664149  341866 retry.go:31] will retry after 891.510554ms: missing components: kube-dns
	I1209 02:39:13.559830  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:13.559862  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:13.559874  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:13.559883  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running
	I1209 02:39:13.559892  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:13.559899  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:13.559905  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running
	I1209 02:39:13.559916  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Running
	I1209 02:39:13.559932  341866 retry.go:31] will retry after 991.329646ms: missing components: kube-dns
	I1209 02:39:14.556878  341866 system_pods.go:86] 7 kube-system pods found
	I1209 02:39:14.556914  341866 system_pods.go:89] "coredns-66bc5c9577-jxzng" [0e39769f-ee89-4a38-bcd4-315775516f22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 02:39:14.556923  341866 system_pods.go:89] "etcd-custom-flannel-933067" [e2ad0e40-8e88-4b41-94bc-9b3589cbfd66] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 02:39:14.556932  341866 system_pods.go:89] "kube-apiserver-custom-flannel-933067" [3cac2dcc-9ab2-46a1-a80e-79ee3f24ece5] Running
	I1209 02:39:14.556941  341866 system_pods.go:89] "kube-controller-manager-custom-flannel-933067" [c7339238-635b-47d2-8d99-5b653bfcdcb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 02:39:14.556947  341866 system_pods.go:89] "kube-proxy-qlj7v" [309808bb-2f2f-4929-8340-24d36324b618] Running
	I1209 02:39:14.556957  341866 system_pods.go:89] "kube-scheduler-custom-flannel-933067" [cf11f066-552e-4a85-98b5-126acb59b492] Running
	I1209 02:39:14.556967  341866 system_pods.go:89] "storage-provisioner" [ed4661e8-fbb0-47d7-8cc4-cd78f3ca730b] Running
	I1209 02:39:14.556986  341866 retry.go:31] will retry after 913.808979ms: missing components: kube-dns
	I1209 02:39:12.228239  353996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v flannel-933067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.683609331s)
	I1209 02:39:12.228275  353996 kic.go:203] duration metric: took 4.683789765s to extract preloaded images to volume ...
	W1209 02:39:12.228387  353996 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1209 02:39:12.228439  353996 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1209 02:39:12.228484  353996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 02:39:12.297355  353996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-933067 --name flannel-933067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-933067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-933067 --network flannel-933067 --ip 192.168.103.2 --volume flannel-933067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
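The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 host ports (the `--publish=127.0.0.1::PORT` form lets docker pick the host side). The actual mappings minikube will dial, such as the SSH port resolved a few lines below, can be listed with a standard command (sketch, container name from this log):

	docker port flannel-933067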
	I1209 02:39:12.612849  353996 cli_runner.go:164] Run: docker container inspect flannel-933067 --format={{.State.Running}}
	I1209 02:39:12.634698  353996 cli_runner.go:164] Run: docker container inspect flannel-933067 --format={{.State.Status}}
	I1209 02:39:12.659996  353996 cli_runner.go:164] Run: docker exec flannel-933067 stat /var/lib/dpkg/alternatives/iptables
	I1209 02:39:12.716469  353996 oci.go:144] the created container "flannel-933067" has a running status.
	I1209 02:39:12.716501  353996 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa...
	I1209 02:39:12.921187  353996 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 02:39:12.953282  353996 cli_runner.go:164] Run: docker container inspect flannel-933067 --format={{.State.Status}}
	I1209 02:39:12.973210  353996 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 02:39:12.973231  353996 kic_runner.go:114] Args: [docker exec --privileged flannel-933067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 02:39:13.025182  353996 cli_runner.go:164] Run: docker container inspect flannel-933067 --format={{.State.Status}}
	I1209 02:39:13.043923  353996 machine.go:94] provisionDockerMachine start ...
	I1209 02:39:13.044214  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:13.061940  353996 main.go:143] libmachine: Using SSH client type: native
	I1209 02:39:13.062231  353996 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1209 02:39:13.062246  353996 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 02:39:13.195892  353996 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-933067
	
	I1209 02:39:13.195933  353996 ubuntu.go:182] provisioning hostname "flannel-933067"
	I1209 02:39:13.195998  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:13.220315  353996 main.go:143] libmachine: Using SSH client type: native
	I1209 02:39:13.220587  353996 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1209 02:39:13.220609  353996 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-933067 && echo "flannel-933067" | sudo tee /etc/hostname
	I1209 02:39:13.360214  353996 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-933067
	
	I1209 02:39:13.360323  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:13.380626  353996 main.go:143] libmachine: Using SSH client type: native
	I1209 02:39:13.380910  353996 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1209 02:39:13.380937  353996 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-933067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-933067/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-933067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 02:39:13.513433  353996 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 02:39:13.513464  353996 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-11001/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-11001/.minikube}
	I1209 02:39:13.513505  353996 ubuntu.go:190] setting up certificates
	I1209 02:39:13.513519  353996 provision.go:84] configureAuth start
	I1209 02:39:13.513579  353996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-933067
	I1209 02:39:13.533510  353996 provision.go:143] copyHostCerts
	I1209 02:39:13.533566  353996 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem, removing ...
	I1209 02:39:13.533581  353996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem
	I1209 02:39:13.533673  353996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/ca.pem (1078 bytes)
	I1209 02:39:13.533780  353996 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem, removing ...
	I1209 02:39:13.533793  353996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem
	I1209 02:39:13.533833  353996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/cert.pem (1123 bytes)
	I1209 02:39:13.533918  353996 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem, removing ...
	I1209 02:39:13.533928  353996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem
	I1209 02:39:13.533963  353996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-11001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-11001/.minikube/key.pem (1679 bytes)
	I1209 02:39:13.534035  353996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca-key.pem org=jenkins.flannel-933067 san=[127.0.0.1 192.168.103.2 flannel-933067 localhost minikube]
	I1209 02:39:13.571503  353996 provision.go:177] copyRemoteCerts
	I1209 02:39:13.571545  353996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 02:39:13.571604  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:13.589956  353996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa Username:docker}
	I1209 02:39:13.688779  353996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 02:39:13.715799  353996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1209 02:39:13.742868  353996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 02:39:13.761813  353996 provision.go:87] duration metric: took 248.280191ms to configureAuth
	I1209 02:39:13.761850  353996 ubuntu.go:206] setting minikube options for container-runtime
	I1209 02:39:13.762037  353996 config.go:182] Loaded profile config "flannel-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:39:13.762190  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:13.783844  353996 main.go:143] libmachine: Using SSH client type: native
	I1209 02:39:13.784147  353996 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1209 02:39:13.784178  353996 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 02:39:14.072148  353996 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 02:39:14.072200  353996 machine.go:97] duration metric: took 1.028256873s to provisionDockerMachine
	I1209 02:39:14.072219  353996 client.go:176] duration metric: took 7.165927055s to LocalClient.Create
	I1209 02:39:14.072241  353996 start.go:167] duration metric: took 7.166000115s to libmachine.API.Create "flannel-933067"
	I1209 02:39:14.072254  353996 start.go:293] postStartSetup for "flannel-933067" (driver="docker")
	I1209 02:39:14.072267  353996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 02:39:14.074432  353996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 02:39:14.074488  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:14.099367  353996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa Username:docker}
	I1209 02:39:14.194917  353996 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 02:39:14.198670  353996 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 02:39:14.198701  353996 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 02:39:14.198713  353996 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/addons for local assets ...
	I1209 02:39:14.198781  353996 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-11001/.minikube/files for local assets ...
	I1209 02:39:14.198892  353996 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem -> 145522.pem in /etc/ssl/certs
	I1209 02:39:14.199029  353996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 02:39:14.207736  353996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/ssl/certs/145522.pem --> /etc/ssl/certs/145522.pem (1708 bytes)
	I1209 02:39:14.226424  353996 start.go:296] duration metric: took 154.158107ms for postStartSetup
	I1209 02:39:14.226749  353996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-933067
	I1209 02:39:14.244748  353996 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/flannel-933067/config.json ...
	I1209 02:39:14.245016  353996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:39:14.245054  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:14.262916  353996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa Username:docker}
	I1209 02:39:14.356145  353996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 02:39:14.361899  353996 start.go:128] duration metric: took 7.457673972s to createHost
	I1209 02:39:14.361922  353996 start.go:83] releasing machines lock for "flannel-933067", held for 7.457832213s
	I1209 02:39:14.362006  353996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-933067
	I1209 02:39:14.382191  353996 ssh_runner.go:195] Run: cat /version.json
	I1209 02:39:14.382228  353996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 02:39:14.382255  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:14.382307  353996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-933067
	I1209 02:39:14.402666  353996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa Username:docker}
	I1209 02:39:14.403467  353996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/flannel-933067/id_rsa Username:docker}
	I1209 02:39:14.566744  353996 ssh_runner.go:195] Run: systemctl --version
	I1209 02:39:14.574942  353996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 02:39:14.619782  353996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 02:39:14.625062  353996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 02:39:14.625128  353996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 02:39:14.650865  353996 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
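The find invocation two lines up is shown with its shell quoting stripped by the argument logger; its effect is to rename every bridge/podman conflist to *.mk_disabled so that only the CNI minikube selected is loaded, as the "disabled [...] bridge cni config(s)" line confirms. A re-runnable form with quoting restored (same semantics; the "$1" indirection is a hardened variant of the literal {} substitution in the logged command):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;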
	I1209 02:39:14.650889  353996 start.go:496] detecting cgroup driver to use...
	I1209 02:39:14.650921  353996 detect.go:190] detected "systemd" cgroup driver on host os
	I1209 02:39:14.650997  353996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 02:39:14.669437  353996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 02:39:14.681758  353996 docker.go:218] disabling cri-docker service (if available) ...
	I1209 02:39:14.681802  353996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 02:39:14.697537  353996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 02:39:14.715121  353996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 02:39:14.820959  353996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 02:39:14.920133  353996 docker.go:234] disabling docker service ...
	I1209 02:39:14.920188  353996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 02:39:14.939630  353996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 02:39:14.953562  353996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 02:39:15.045186  353996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 02:39:15.137190  353996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 02:39:15.152078  353996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 02:39:15.168054  353996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 02:39:15.168119  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.179900  353996 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1209 02:39:15.179957  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.189596  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.198893  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.208335  353996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 02:39:15.216169  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.224690  353996 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.237426  353996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 02:39:15.245725  353996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 02:39:15.254222  353996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 02:39:15.261513  353996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 02:39:15.352834  353996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 02:39:15.492148  353996 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 02:39:15.492218  353996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 02:39:15.496528  353996 start.go:564] Will wait 60s for crictl version
	I1209 02:39:15.496587  353996 ssh_runner.go:195] Run: which crictl
	I1209 02:39:15.500295  353996 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 02:39:15.525887  353996 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 02:39:15.525988  353996 ssh_runner.go:195] Run: crio --version
	I1209 02:39:15.555169  353996 ssh_runner.go:195] Run: crio --version
	I1209 02:39:15.589083  353996 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
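
A minimal sketch of what the sed edits above leave behind in the CRI-O drop-in (values taken directly from the commands logged here; other keys present in the real /etc/crio/crio.conf.d/02-crio.conf are omitted):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]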
	
	
	==> CRI-O <==
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.277936107Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.281853011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 02:38:36 embed-certs-485234 crio[567]: time="2025-12-09T02:38:36.281874954Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.393007548Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1076913f-082b-4559-ac2e-182004787f38 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.396347249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e7bdc1de-a1ce-4ba6-8999-331e054c49da name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.399571392Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=79b46204-67cc-45c3-a874-a146d84133e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.399756443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.408167507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.408941055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.440921974Z" level=info msg="Created container c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=79b46204-67cc-45c3-a874-a146d84133e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.441582519Z" level=info msg="Starting container: c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0" id=0f0802ad-41ce-4c68-9809-752aa358f681 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.443854847Z" level=info msg="Started container" PID=1762 containerID=c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper id=0f0802ad-41ce-4c68-9809-752aa358f681 name=/runtime.v1.RuntimeService/StartContainer sandboxID=453fb6d7a6d5cb5c7627c51560b109ce7231bc92ab5c250c2f858c0e0b8cf475
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.509739462Z" level=info msg="Removing container: 71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577" id=9339be47-2c3d-4f99-b0f9-2badee6da668 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:38:49 embed-certs-485234 crio[567]: time="2025-12-09T02:38:49.524599835Z" level=info msg="Removed container 71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr/dashboard-metrics-scraper" id=9339be47-2c3d-4f99-b0f9-2badee6da668 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.531215595Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0ad61c9-b644-4c22-8132-db5246475783 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.532208601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f68ffd0d-0018-4a63-b413-6089d45f0a4e name=/runtime.v1.ImageService/ImageStatus
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.533267616Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=24a0ab64-b6d3-4831-aa89-41a48062e53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.533391654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539058388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539195095Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3541f2da52c9fd9a470a0563945b8c818f3c81e68e5fbefa8d672102cb14d432/merged/etc/passwd: no such file or directory"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539217126Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3541f2da52c9fd9a470a0563945b8c818f3c81e68e5fbefa8d672102cb14d432/merged/etc/group: no such file or directory"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.539406752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.569956504Z" level=info msg="Created container 61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145: kube-system/storage-provisioner/storage-provisioner" id=24a0ab64-b6d3-4831-aa89-41a48062e53e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.570528566Z" level=info msg="Starting container: 61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145" id=5c498fea-800d-4d01-97d5-c4971971176c name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 02:38:56 embed-certs-485234 crio[567]: time="2025-12-09T02:38:56.572676545Z" level=info msg="Started container" PID=1777 containerID=61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145 description=kube-system/storage-provisioner/storage-provisioner id=5c498fea-800d-4d01-97d5-c4971971176c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba8287127a1f7647a1e3b8189fbcca801f291afd13aca8800d12bbb5e88ea036
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	61185cadc62b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   ba8287127a1f7       storage-provisioner                          kube-system
	c0c0884e326a4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   453fb6d7a6d5c       dashboard-metrics-scraper-6ffb444bf9-dttsr   kubernetes-dashboard
	55fee4ca21f23       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   4fc9909d2de11       kubernetes-dashboard-855c9754f9-qgrpj        kubernetes-dashboard
	ac0b7af3de031       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   78541c5744ed6       coredns-66bc5c9577-sk4dm                     kube-system
	1c9fe02fb40b8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   58a00080213f7       busybox                                      default
	bdcdd90996327       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   ab60549e2bb23       kube-proxy-ldzjl                             kube-system
	c623235e88714       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   ba8287127a1f7       storage-provisioner                          kube-system
	61b6510c4ee0e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   1c64f0b37b1c2       kindnet-m72mz                                kube-system
	9a18851e4fed4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   a9c2fedcc566c       etcd-embed-certs-485234                      kube-system
	a25f764bedd8c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   78c08012de8ad       kube-scheduler-embed-certs-485234            kube-system
	c005019871649       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   eaf71cf8baf4b       kube-apiserver-embed-certs-485234            kube-system
	6bedda73910b6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   887bfcb44e8c2       kube-controller-manager-embed-certs-485234   kube-system
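
The scraper row above (STATE Exited, ATTEMPT 2) is the container behind the CrashLoopBackOff seen in the kubelet log further down. A hedged way to pull its exit details on the node; the truncated ID is the one printed in the table, and crictl accepts ID prefixes:

	sudo crictl ps -a --name dashboard-metrics-scraper   # list all attempts, including exited ones
	sudo crictl logs c0c0884e326a4                       # stdout/stderr of the last attempt
	sudo crictl inspect c0c0884e326a4 | grep -i exitCode # exit code from the container status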
	
	
	==> coredns [ac0b7af3de0317caddf4d550c0e9ea234551e5f50c9fc7ea462dc8bc6b281b6d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47102 - 64650 "HINFO IN 3800287044420242770.7883161424784710309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10402295s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
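
The dial timeouts to 10.96.0.1:443 mean CoreDNS could not reach the kubernetes Service VIP, which only answers once kube-proxy has programmed it; the storage-provisioner log at the bottom of this dump dies on the same address. Two hedged checks (the pod name and image below are illustrative, not from this run):

	# on the node: is the VIP wired into the iptables NAT rules kube-proxy writes?
	sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
	# from inside the cluster: can a throwaway pod reach the API server via the VIP?
	kubectl run vip-check --rm -it --restart=Never --image=curlimages/curl \
	  --command -- curl -sk https://10.96.0.1:443/version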
	
	
	==> describe nodes <==
	Name:               embed-certs-485234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-485234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=embed-certs-485234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_37_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:37:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-485234
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:39:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:38:55 +0000   Tue, 09 Dec 2025 02:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-485234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a319405cfd57de33e526a986936974c
	  System UUID:                e57d68b0-a212-4022-b9d5-5572cf2bedcf
	  Boot ID:                    64944cad-58a6-4afe-8ab0-bc86144efeee
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-sk4dm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-485234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-m72mz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-485234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-485234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-ldzjl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-485234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dttsr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qgrpj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-485234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-485234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-485234 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-485234 event: Registered Node embed-certs-485234 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-485234 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-485234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-485234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-485234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-485234 event: Registered Node embed-certs-485234 in Controller
	
	
	==> dmesg <==
	[  +0.089535] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029750] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.044351] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 9 01:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.022889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023915] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +2.047784] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[  +4.031617] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[Dec 9 01:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +16.382316] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
	[ +32.252710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 46 cc 53 8c ac 06 8a a5 d6 5d 26 16 08 00
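
These martian entries record packets with a pod-CIDR source (10.244.0.21) arriving from 127.0.0.1 on eth0; the kernel only prints them because martian logging is enabled. A hedged sketch for inspecting, or silencing, the knob if this noise is expected in CI:

	sysctl net.ipv4.conf.all.log_martians            # read the current setting
	sudo sysctl -w net.ipv4.conf.all.log_martians=0  # stop logging them (assumption: the noise is benign here)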
	
	
	==> etcd [9a18851e4fed459b0910fdd3ea91834db962f9676a200db349876cbe34a7a2dc] <==
	{"level":"warn","ts":"2025-12-09T02:38:23.703460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.710133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.716984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.724797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.731314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.738680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.746258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.753862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.768790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.776118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.788911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.795246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.802786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.810664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.818736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.826184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.833690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.839946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.846458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.852980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.872122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.880412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.888575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:38:23.943504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:38:38.890317Z","caller":"traceutil/trace.go:172","msg":"trace[150456985] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"133.880872ms","start":"2025-12-09T02:38:38.756416Z","end":"2025-12-09T02:38:38.890297Z","steps":["trace[150456985] 'process raft request'  (duration: 104.67944ms)","trace[150456985] 'compare'  (duration: 29.109567ms)"],"step_count":2}
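
The repeated "rejected connection ... EOF" warnings are clients on 127.0.0.1 closing before completing the TLS handshake, typically plain-TCP health probes; they are noisy but not errors. If etcdctl is available on the node, a hedged health check (cert paths assume minikube's default layout under /var/lib/minikube/certs):

	sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health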
	
	
	==> kernel <==
	 02:39:17 up  1:21,  0 user,  load average: 5.29, 3.50, 2.30
	Linux embed-certs-485234 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [61b6510c4ee0e600ee2d8713affc5230566f95c3c62e347aabb29817104c56a8] <==
	I1209 02:38:25.976625       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 02:38:25.976939       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1209 02:38:25.977148       1 main.go:148] setting mtu 1500 for CNI 
	I1209 02:38:25.977175       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 02:38:25.977196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T02:38:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 02:38:26.258444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 02:38:26.258467       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 02:38:26.258479       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 02:38:26.258757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 02:38:26.559709       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 02:38:26.559751       1 metrics.go:72] Registering metrics
	I1209 02:38:26.559839       1 controller.go:711] "Syncing nftables rules"
	I1209 02:38:36.259128       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:36.259203       1 main.go:301] handling current node
	I1209 02:38:46.260808       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:46.260837       1 main.go:301] handling current node
	I1209 02:38:56.259333       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:38:56.259379       1 main.go:301] handling current node
	I1209 02:39:06.264704       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:39:06.264750       1 main.go:301] handling current node
	I1209 02:39:16.261183       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1209 02:39:16.261219       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c005019871649c13a8dc79cc3b49d854c135ac71f085513bec085b210e679265] <==
	I1209 02:38:24.552398       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 02:38:24.552872       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1209 02:38:24.552905       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 02:38:24.556655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 02:38:24.556676       1 aggregator.go:171] initial CRD sync complete...
	I1209 02:38:24.556683       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 02:38:24.556688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 02:38:24.556694       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:38:24.554113       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 02:38:24.557240       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 02:38:24.554254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 02:38:24.564783       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 02:38:24.594227       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:38:24.612400       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:38:24.960917       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:38:24.986893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:38:25.003737       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:38:25.010289       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:38:25.015862       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:38:25.046344       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.246.121"}
	I1209 02:38:25.058917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.102.59"}
	I1209 02:38:25.446442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:38:28.109258       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:38:28.358360       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:38:28.507718       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6bedda73910b696dcf23480b8a56d9ad573984aa03ac183ea9091d6bdc9f522e] <==
	I1209 02:38:27.879768       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 02:38:27.879776       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 02:38:27.882118       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 02:38:27.885432       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 02:38:27.887680       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:38:27.890119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:38:27.895501       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:38:27.898859       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 02:38:27.901149       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1209 02:38:27.904428       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:38:27.904459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 02:38:27.904560       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:38:27.905609       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:38:27.905661       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1209 02:38:27.905680       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:38:27.905711       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:38:27.905864       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 02:38:27.907352       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 02:38:27.908442       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 02:38:27.910693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 02:38:27.910839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:38:27.913002       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:38:27.919285       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 02:38:27.921522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 02:38:27.935852       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bdcdd909963275234ca5ab86ece711497b2c83edef0f3bf455c0278f574ab64e] <==
	I1209 02:38:25.812657       1 server_linux.go:53] "Using iptables proxy"
	I1209 02:38:25.884587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:38:25.985233       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:38:25.985273       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1209 02:38:25.985376       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:38:26.010821       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 02:38:26.010883       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:38:26.017127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:38:26.017571       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:38:26.017972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:38:26.021432       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:38:26.021461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:38:26.022136       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:38:26.024084       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:38:26.022733       1 config.go:200] "Starting service config controller"
	I1209 02:38:26.024123       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:38:26.024070       1 config.go:309] "Starting node config controller"
	I1209 02:38:26.024148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:38:26.024154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:38:26.121672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:38:26.124349       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:38:26.124360       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
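
The E-level line above is kube-proxy's own hint, not a failure: with nodePortAddresses unset, NodePort services listen on every local IP. A hedged sketch of the suggested fix as a KubeProxyConfiguration fragment (this cluster does not set it; "primary" restricts NodePort listeners to the node's primary IPs):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - primary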
	
	
	==> kube-scheduler [a25f764bedd8c070035e47208797683eec3e7707b255c4203f6216099003061b] <==
	I1209 02:38:23.504220       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:38:24.495271       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:38:24.495781       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:38:24.496168       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:38:24.496332       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:38:24.524392       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:38:24.524484       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:38:24.539762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:38:24.539851       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:38:24.542097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:38:24.542188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:38:24.640015       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540121     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsgq6\" (UniqueName: \"kubernetes.io/projected/203ce5c0-481b-4ec6-afe4-db17c646a2ae-kube-api-access-zsgq6\") pod \"kubernetes-dashboard-855c9754f9-qgrpj\" (UID: \"203ce5c0-481b-4ec6-afe4-db17c646a2ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qgrpj"
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540239     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5648\" (UniqueName: \"kubernetes.io/projected/8782590a-10e3-436f-8acc-0e0f4c95c53b-kube-api-access-h5648\") pod \"dashboard-metrics-scraper-6ffb444bf9-dttsr\" (UID: \"8782590a-10e3-436f-8acc-0e0f4c95c53b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr"
	Dec 09 02:38:28 embed-certs-485234 kubelet[733]: I1209 02:38:28.540299     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8782590a-10e3-436f-8acc-0e0f4c95c53b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dttsr\" (UID: \"8782590a-10e3-436f-8acc-0e0f4c95c53b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr"
	Dec 09 02:38:31 embed-certs-485234 kubelet[733]: I1209 02:38:31.453087     733 scope.go:117] "RemoveContainer" containerID="a0da432a0df376772dd4239d098b418cef1af83e5f9a275153cbd3403ed839e0"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: I1209 02:38:32.457578     733 scope.go:117] "RemoveContainer" containerID="a0da432a0df376772dd4239d098b418cef1af83e5f9a275153cbd3403ed839e0"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: I1209 02:38:32.457966     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:32 embed-certs-485234 kubelet[733]: E1209 02:38:32.458138     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:33 embed-certs-485234 kubelet[733]: I1209 02:38:33.461970     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:33 embed-certs-485234 kubelet[733]: E1209 02:38:33.462148     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:34 embed-certs-485234 kubelet[733]: I1209 02:38:34.465122     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:34 embed-certs-485234 kubelet[733]: E1209 02:38:34.465364     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:35 embed-certs-485234 kubelet[733]: I1209 02:38:35.485204     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qgrpj" podStartSLOduration=1.362615845 podStartE2EDuration="7.485178338s" podCreationTimestamp="2025-12-09 02:38:28 +0000 UTC" firstStartedPulling="2025-12-09 02:38:28.822697325 +0000 UTC m=+6.517920206" lastFinishedPulling="2025-12-09 02:38:34.945259821 +0000 UTC m=+12.640482699" observedRunningTime="2025-12-09 02:38:35.484518742 +0000 UTC m=+13.179741651" watchObservedRunningTime="2025-12-09 02:38:35.485178338 +0000 UTC m=+13.180401238"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.392408     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.508292     733 scope.go:117] "RemoveContainer" containerID="71493b2ca7fb22c2aa64a4498c01a5baaf6f47d438d9e65ceaae945bc7d51577"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: I1209 02:38:49.508555     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:38:49 embed-certs-485234 kubelet[733]: E1209 02:38:49.508764     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:52 embed-certs-485234 kubelet[733]: I1209 02:38:52.939689     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:38:52 embed-certs-485234 kubelet[733]: E1209 02:38:52.939933     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:38:56 embed-certs-485234 kubelet[733]: I1209 02:38:56.530838     733 scope.go:117] "RemoveContainer" containerID="c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296"
	Dec 09 02:39:08 embed-certs-485234 kubelet[733]: I1209 02:39:08.393294     733 scope.go:117] "RemoveContainer" containerID="c0c0884e326a46def1f0fbad0660689d7caa58668ab931c097bf0055749f70b0"
	Dec 09 02:39:08 embed-certs-485234 kubelet[733]: E1209 02:39:08.394045     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttsr_kubernetes-dashboard(8782590a-10e3-436f-8acc-0e0f4c95c53b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttsr" podUID="8782590a-10e3-436f-8acc-0e0f4c95c53b"
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 02:39:12 embed-certs-485234 systemd[1]: kubelet.service: Consumed 1.598s CPU time.
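
The systemd stop at 02:39:12 is the pause path of the failing TestStartStop/group/embed-certs/serial/Pause: minikube pause stops the kubelet before freezing the containers, which is why the kubelet log ends here. To reproduce by hand against this profile (same binary and flags the test harness uses elsewhere in this report):

	out/minikube-linux-amd64 pause -p embed-certs-485234 --alsologtostderr -v=1
	out/minikube-linux-amd64 unpause -p embed-certs-485234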
	
	
	==> kubernetes-dashboard [55fee4ca21f23f4f6a1737ed43fa72fae1199f3a8dee15cbb2ccf0b489ae0266] <==
	2025/12/09 02:38:35 Using namespace: kubernetes-dashboard
	2025/12/09 02:38:35 Using in-cluster config to connect to apiserver
	2025/12/09 02:38:35 Using secret token for csrf signing
	2025/12/09 02:38:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/09 02:38:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/09 02:38:35 Successful initial request to the apiserver, version: v1.34.2
	2025/12/09 02:38:35 Generating JWE encryption key
	2025/12/09 02:38:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/09 02:38:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/09 02:38:35 Initializing JWE encryption key from synchronized object
	2025/12/09 02:38:35 Creating in-cluster Sidecar client
	2025/12/09 02:38:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/09 02:38:35 Serving insecurely on HTTP port: 9090
	2025/12/09 02:38:35 Starting overwatch
	2025/12/09 02:39:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [61185cadc62b35fbd5d09eb4f2045e002615bf03c5ce52541b5c6bbe3e361145] <==
	I1209 02:38:56.586352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:38:56.593781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:38:56.593877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:38:56.596776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:00.052352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:04.312687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:07.911386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:11.048772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.071302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.076923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:39:14.077086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:39:14.077224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5aeea5d1-f9d5-472a-8ee4-5bcc362f6ec9", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77 became leader
	I1209 02:39:14.077268       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77!
	W1209 02:39:14.079400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:14.084601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:39:14.177840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-485234_0bcb9ab9-5a79-4b9d-ba64-5335bc767f77!
	W1209 02:39:16.088350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:39:16.092715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c623235e887143b6c59c75b4efff2a8935ff6e87604fbf00895ae925bf1ea296] <==
	I1209 02:38:25.772239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:38:55.774804       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-485234 -n embed-certs-485234
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-485234 -n embed-certs-485234: exit status 2 (324.194134ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-485234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.19s)


Test pass (354/415)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.15
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.04
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.79
31 TestOffline 62.44
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 92.98
40 TestAddons/serial/GCPAuth/Namespaces 0.16
41 TestAddons/serial/GCPAuth/FakeCredentials 7.41
57 TestAddons/StoppedEnableDisable 18.49
58 TestCertOptions 30.6
59 TestCertExpiration 211.6
61 TestForceSystemdFlag 25.13
62 TestForceSystemdEnv 24.2
67 TestErrorSpam/setup 23.18
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.89
70 TestErrorSpam/pause 6.1
71 TestErrorSpam/unpause 5.86
72 TestErrorSpam/stop 12.56
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.85
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.85
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.87
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.45
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 47.14
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.11
95 TestFunctional/serial/LogsFileCmd 1.14
96 TestFunctional/serial/InvalidService 3.69
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 7.13
100 TestFunctional/parallel/DryRun 0.41
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 1.03
106 TestFunctional/parallel/ServiceCmdConnect 8.69
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 24.71
110 TestFunctional/parallel/SSHCmd 0.75
111 TestFunctional/parallel/CpCmd 1.9
112 TestFunctional/parallel/MySQL 24.36
113 TestFunctional/parallel/FileSync 0.26
114 TestFunctional/parallel/CertSync 1.83
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
122 TestFunctional/parallel/License 0.48
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
126 TestFunctional/parallel/Version/short 0.07
127 TestFunctional/parallel/Version/components 0.55
128 TestFunctional/parallel/ServiceCmd/DeployApp 15.16
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.24
134 TestFunctional/parallel/ServiceCmd/List 0.49
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
143 TestFunctional/parallel/ServiceCmd/Format 0.52
144 TestFunctional/parallel/ServiceCmd/URL 0.52
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.26
150 TestFunctional/parallel/ImageCommands/Setup 1.09
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
159 TestFunctional/parallel/MountCmd/any-port 10
160 TestFunctional/parallel/ProfileCmd/profile_list 0.46
161 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
162 TestFunctional/parallel/MountCmd/specific-port 1.56
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 66.95
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 5.91
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.59
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.17
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.48
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 43.4
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.13
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.14
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.9
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 8.27
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.39
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.91
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 12.89
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 21.28
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.54
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.73
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 25.1
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.59
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.45
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.17
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.45
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.9
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 7.78
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.39
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.11
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.86
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 8.19
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.84
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.48
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.56
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.33
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.35
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.81
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.32
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.39
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.35
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.67
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.96
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.32
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.19
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 146.4
266 TestMultiControlPlane/serial/DeployApp 4.03
267 TestMultiControlPlane/serial/PingHostFromPods 1
268 TestMultiControlPlane/serial/AddWorkerNode 53.36
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
271 TestMultiControlPlane/serial/CopyFile 16.23
272 TestMultiControlPlane/serial/StopSecondaryNode 18.71
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.82
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.76
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.43
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
279 TestMultiControlPlane/serial/StopCluster 43.86
280 TestMultiControlPlane/serial/RestartCluster 55.28
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
282 TestMultiControlPlane/serial/AddSecondaryNode 41.66
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
288 TestJSONOutput/start/Command 37.95
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.04
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 29.79
314 TestKicCustomNetwork/use_default_bridge_network 21.01
315 TestKicExistingNetwork 23.25
316 TestKicCustomSubnet 23.6
317 TestKicStaticIP 25.7
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 44.93
322 TestMountStart/serial/StartWithMountFirst 4.48
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 7.49
325 TestMountStart/serial/VerifyMountSecond 0.25
326 TestMountStart/serial/DeleteFirst 1.66
327 TestMountStart/serial/VerifyMountPostDelete 0.25
328 TestMountStart/serial/Stop 1.24
329 TestMountStart/serial/RestartStopped 7.12
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 93.2
334 TestMultiNode/serial/DeployApp2Nodes 3.25
335 TestMultiNode/serial/PingHostFrom2Pods 0.7
336 TestMultiNode/serial/AddNode 55.64
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.61
339 TestMultiNode/serial/CopyFile 9.3
340 TestMultiNode/serial/StopNode 2.19
341 TestMultiNode/serial/StartAfterStop 6.95
342 TestMultiNode/serial/RestartKeepsNodes 79.38
343 TestMultiNode/serial/DeleteNode 5.14
344 TestMultiNode/serial/StopMultiNode 30.22
345 TestMultiNode/serial/RestartMultiNode 46.91
346 TestMultiNode/serial/ValidateNameConflict 21.91
351 TestPreload 106.47
353 TestScheduledStopUnix 93.93
356 TestInsufficientStorage 8.58
357 TestRunningBinaryUpgrade 294.4
359 TestKubernetesUpgrade 295.28
360 TestMissingContainerUpgrade 89.08
362 TestStoppedBinaryUpgrade/Setup 0.72
363 TestPause/serial/Start 57.09
364 TestStoppedBinaryUpgrade/Upgrade 304.44
365 TestPause/serial/SecondStartNoReconfiguration 5.58
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
376 TestNoKubernetes/serial/StartWithK8s 19.37
377 TestNoKubernetes/serial/StartWithStopK8s 22.86
378 TestNoKubernetes/serial/Start 6.98
379 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
380 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
381 TestNoKubernetes/serial/ProfileList 30.42
382 TestNoKubernetes/serial/Stop 1.27
383 TestNoKubernetes/serial/StartNoArgs 6.2
384 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
392 TestNetworkPlugins/group/false 3.28
396 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
398 TestStartStop/group/old-k8s-version/serial/FirstStart 50.42
400 TestStartStop/group/no-preload/serial/FirstStart 49.83
402 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.51
403 TestStartStop/group/old-k8s-version/serial/DeployApp 7.29
404 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.22
405 TestStartStop/group/no-preload/serial/DeployApp 8.21
408 TestStartStop/group/old-k8s-version/serial/Stop 16.06
409 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.11
412 TestStartStop/group/newest-cni/serial/FirstStart 21.28
413 TestStartStop/group/no-preload/serial/Stop 18.21
414 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
415 TestStartStop/group/old-k8s-version/serial/SecondStart 46.27
416 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
417 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.41
418 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
419 TestStartStop/group/no-preload/serial/SecondStart 46.72
420 TestStartStop/group/newest-cni/serial/DeployApp 0
422 TestStartStop/group/newest-cni/serial/Stop 14.9
423 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
424 TestStartStop/group/newest-cni/serial/SecondStart 10.11
425 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
426 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
427 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
429 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
431 TestStartStop/group/embed-certs/serial/FirstStart 39.41
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
434 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
435 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
436 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
438 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
440 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
441 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
442 TestNetworkPlugins/group/auto/Start 42.73
444 TestNetworkPlugins/group/kindnet/Start 41.05
445 TestNetworkPlugins/group/calico/Start 51.64
446 TestStartStop/group/embed-certs/serial/DeployApp 7.26
448 TestStartStop/group/embed-certs/serial/Stop 19.67
449 TestNetworkPlugins/group/auto/KubeletFlags 0.28
450 TestNetworkPlugins/group/auto/NetCatPod 9.18
451 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
452 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
453 TestStartStop/group/embed-certs/serial/SecondStart 44.85
454 TestNetworkPlugins/group/auto/DNS 0.1
455 TestNetworkPlugins/group/auto/Localhost 0.08
456 TestNetworkPlugins/group/auto/HairPin 0.08
457 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
458 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
459 TestNetworkPlugins/group/kindnet/DNS 0.12
460 TestNetworkPlugins/group/kindnet/Localhost 0.1
461 TestNetworkPlugins/group/kindnet/HairPin 0.1
462 TestNetworkPlugins/group/calico/ControllerPod 6.01
463 TestNetworkPlugins/group/calico/KubeletFlags 0.3
464 TestNetworkPlugins/group/calico/NetCatPod 9.22
465 TestNetworkPlugins/group/custom-flannel/Start 47.3
466 TestNetworkPlugins/group/calico/DNS 0.12
467 TestNetworkPlugins/group/calico/Localhost 0.11
468 TestNetworkPlugins/group/calico/HairPin 0.1
469 TestNetworkPlugins/group/enable-default-cni/Start 71.3
470 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
472 TestNetworkPlugins/group/flannel/Start 51.05
473 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.49
475 TestNetworkPlugins/group/bridge/Start 32.49
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 7.23
478 TestNetworkPlugins/group/custom-flannel/DNS 0.1
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
481 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
482 TestNetworkPlugins/group/bridge/NetCatPod 8.19
483 TestNetworkPlugins/group/flannel/ControllerPod 6.01
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.17
486 TestNetworkPlugins/group/bridge/DNS 0.14
487 TestNetworkPlugins/group/bridge/Localhost 0.11
488 TestNetworkPlugins/group/bridge/HairPin 0.11
489 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
490 TestNetworkPlugins/group/flannel/NetCatPod 8.16
491 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
492 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
493 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
494 TestNetworkPlugins/group/flannel/DNS 0.12
495 TestNetworkPlugins/group/flannel/Localhost 0.09
496 TestNetworkPlugins/group/flannel/HairPin 0.09

TestDownloadOnly/v1.28.0/json-events (4.39s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-983180 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-983180 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.391869347s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.39s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1209 01:55:35.934283   14552 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1209 01:55:35.934360   14552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
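
The preload.go check logged above is essentially a stat of a versioned tarball under the profile's cache directory. A rough Go sketch of that check follows; the filename layout ("v18", "cri-o-overlay", "amd64") is copied from this run's log, not derived from minikube's source, and the helper name is made up for illustration.

    // preload_check_sketch.go - approximates the "preload exists" check.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath is a hypothetical helper mirroring the path seen in the log.
    func preloadPath(miniHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath("/home/jenkins/minikube-integration/22081-11001/.minikube", "v1.28.0")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload:", p)
        } else {
            fmt.Println("preload missing:", err)
        }
    }
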

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-983180
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-983180: exit status 85 (66.110307ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-983180 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-983180 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:31.593244   14564 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:31.593338   14564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:31.593346   14564 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:31.593350   14564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:31.593541   14564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	W1209 01:55:31.593651   14564 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22081-11001/.minikube/config/config.json: open /home/jenkins/minikube-integration/22081-11001/.minikube/config/config.json: no such file or directory
	I1209 01:55:31.594089   14564 out.go:368] Setting JSON to true
	I1209 01:55:31.594903   14564 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2281,"bootTime":1765243051,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:31.594953   14564 start.go:143] virtualization: kvm guest
	I1209 01:55:31.599292   14564 out.go:99] [download-only-983180] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1209 01:55:31.599427   14564 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 01:55:31.599453   14564 notify.go:221] Checking for updates...
	I1209 01:55:31.600692   14564 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:31.601968   14564 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:31.603118   14564 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:55:31.604097   14564 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 01:55:31.605142   14564 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 01:55:31.607080   14564 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 01:55:31.607310   14564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:31.629503   14564 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 01:55:31.629571   14564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:31.866473   14564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-09 01:55:31.857799386 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:31.866567   14564 docker.go:319] overlay module found
	I1209 01:55:31.868084   14564 out.go:99] Using the docker driver based on user configuration
	I1209 01:55:31.868105   14564 start.go:309] selected driver: docker
	I1209 01:55:31.868110   14564 start.go:927] validating driver "docker" against <nil>
	I1209 01:55:31.868188   14564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:31.921284   14564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-09 01:55:31.912747895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:31.921471   14564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:31.922038   14564 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1209 01:55:31.922183   14564 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 01:55:31.923727   14564 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-983180 host does not exist
	  To start a cluster, run: "minikube start -p download-only-983180"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
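
Exit status 85 appears to be the expected outcome here: minikube logs is run against a download-only profile whose host was never created, and the test passes while merely recording the non-zero exit. A generic Go sketch of asserting on a specific exit code follows; the expected code and command come from the log, while the checking pattern itself is illustrative rather than the harness's actual assertion.

    // exitcode_sketch.go - asserting on a specific exit code via exec.ExitError.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-983180")
        err := cmd.Run()

        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("expected failure: exit status 85 (host does not exist)")
        } else {
            fmt.Println("unexpected result:", err)
        }
    }
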

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-983180
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.15s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-261314 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-261314 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.151528873s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1209 01:55:39.502267   14552 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 01:55:39.502310   14552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-261314
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-261314: exit status 85 (68.703669ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-983180 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-983180 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-983180                                                                                                                                                   │ download-only-983180 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-261314 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-261314 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:36.399362   14920 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:36.399578   14920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:36.399586   14920 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:36.399590   14920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:36.399796   14920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:55:36.400199   14920 out.go:368] Setting JSON to true
	I1209 01:55:36.401009   14920 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2285,"bootTime":1765243051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:36.401057   14920 start.go:143] virtualization: kvm guest
	I1209 01:55:36.402810   14920 out.go:99] [download-only-261314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:36.402916   14920 notify.go:221] Checking for updates...
	I1209 01:55:36.404744   14920 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:36.405998   14920 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:36.407166   14920 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:55:36.408254   14920 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 01:55:36.409322   14920 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 01:55:36.411273   14920 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 01:55:36.411436   14920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:36.432714   14920 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 01:55:36.432817   14920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:36.485113   14920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-09 01:55:36.476287244 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:36.485249   14920 docker.go:319] overlay module found
	I1209 01:55:36.486626   14920 out.go:99] Using the docker driver based on user configuration
	I1209 01:55:36.486662   14920 start.go:309] selected driver: docker
	I1209 01:55:36.486668   14920 start.go:927] validating driver "docker" against <nil>
	I1209 01:55:36.486736   14920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:36.540282   14920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-09 01:55:36.53102176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:36.540489   14920 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:36.541157   14920 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1209 01:55:36.541343   14920 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 01:55:36.542930   14920 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-261314 host does not exist
	  To start a cluster, run: "minikube start -p download-only-261314"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-261314
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.04s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-303316 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-303316 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.037192789s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.04s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1209 01:55:42.970868   14552 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1209 01:55:42.970907   14552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-303316
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-303316: exit status 85 (68.051535ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-983180 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-983180 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-983180                                                                                                                                                          │ download-only-983180 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-261314 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-261314 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-261314                                                                                                                                                          │ download-only-261314 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-303316 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-303316 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:39.984567   15284 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:39.985233   15284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:39.985244   15284 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:39.985248   15284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:39.985412   15284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 01:55:39.985842   15284 out.go:368] Setting JSON to true
	I1209 01:55:39.986563   15284 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2289,"bootTime":1765243051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:39.986613   15284 start.go:143] virtualization: kvm guest
	I1209 01:55:39.988372   15284 out.go:99] [download-only-303316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:39.988498   15284 notify.go:221] Checking for updates...
	I1209 01:55:39.989818   15284 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:39.991143   15284 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:39.992198   15284 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 01:55:39.993203   15284 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 01:55:39.994250   15284 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 01:55:39.996294   15284 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 01:55:39.996482   15284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:40.018504   15284 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 01:55:40.018600   15284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:40.073280   15284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-09 01:55:40.064067293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:40.073371   15284 docker.go:319] overlay module found
	I1209 01:55:40.074713   15284 out.go:99] Using the docker driver based on user configuration
	I1209 01:55:40.074738   15284 start.go:309] selected driver: docker
	I1209 01:55:40.074744   15284 start.go:927] validating driver "docker" against <nil>
	I1209 01:55:40.074811   15284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 01:55:40.127062   15284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-09 01:55:40.117767035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 01:55:40.127220   15284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:40.127699   15284 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1209 01:55:40.127840   15284 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 01:55:40.129278   15284 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-303316 host does not exist
	  To start a cluster, run: "minikube start -p download-only-303316"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-303316
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-666539 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-666539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-666539
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1209 01:55:44.169604   14552 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-013418 --alsologtostderr --binary-mirror http://127.0.0.1:35749 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-013418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-013418
--- PASS: TestBinaryMirror (0.79s)
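
The checksum=file: suffix in the log above makes the download verify the binary against its published SHA-256 digest instead of re-caching it. A minimal sketch of the equivalent manual check (the two URLs are taken from the log; the curl/sha256sum plumbing is an assumption, not part of the test):

	# Fetch the binary and the bare digest file, then verify locally.
	curl -fsSLO https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check -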

TestOffline (62.44s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-654778 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-654778 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m0.084176302s)
helpers_test.go:175: Cleaning up "offline-crio-654778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-654778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-654778: (2.358099323s)
--- PASS: TestOffline (62.44s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1060: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598284
addons_test.go:1060: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-598284: exit status 85 (60.12346ms)

-- stdout --
	* Profile "addons-598284" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598284"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1071: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598284
addons_test.go:1071: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-598284: exit status 85 (61.355139ms)

-- stdout --
	* Profile "addons-598284" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598284"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (92.98s)

=== RUN   TestAddons/Setup
addons_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p addons-598284 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p addons-598284 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m32.982041547s)
--- PASS: TestAddons/Setup (92.98s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:690: (dbg) Run:  kubectl --context addons-598284 create ns new-namespace
addons_test.go:704: (dbg) Run:  kubectl --context addons-598284 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:735: (dbg) Run:  kubectl --context addons-598284 create -f testdata/busybox.yaml
addons_test.go:742: (dbg) Run:  kubectl --context addons-598284 create sa gcp-auth-test
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8bcca42a-9659-4f22-93da-60add32ec4b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8bcca42a-9659-4f22-93da-60add32ec4b4] Running
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003168953s
addons_test.go:754: (dbg) Run:  kubectl --context addons-598284 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:766: (dbg) Run:  kubectl --context addons-598284 describe sa gcp-auth-test
addons_test.go:804: (dbg) Run:  kubectl --context addons-598284 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

TestAddons/StoppedEnableDisable (18.49s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:177: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-598284
addons_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p addons-598284: (18.217787513s)
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598284
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598284
addons_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-598284
--- PASS: TestAddons/StoppedEnableDisable (18.49s)

TestCertOptions (30.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-465214 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-465214 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.31178421s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-465214 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-465214 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-465214 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-465214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-465214
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-465214: (4.582323911s)
--- PASS: TestCertOptions (30.60s)
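
The openssl call above is also the quickest way to confirm that the extra --apiserver-ips and --apiserver-names values actually landed in the serving certificate. A minimal sketch, assuming the cert-options-465214 profile were still running:

	# The SAN list should include 127.0.0.1, 192.168.15.15, localhost and www.google.com.
	out/minikube-linux-amd64 -p cert-options-465214 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"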

TestCertExpiration (211.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-572052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-572052 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.091923565s)
E1209 02:33:06.552120   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-572052 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.06001294s)
helpers_test.go:175: Cleaning up "cert-expiration-572052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-572052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-572052: (2.449834686s)
--- PASS: TestCertExpiration (211.60s)
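
The effect of --cert-expiration can be read straight off the node as the certificate's notAfter date. A minimal sketch, assuming the cert-expiration-572052 profile were still present:

	# With --cert-expiration=3m the date is minutes away; after the 8760h restart it is a year out.
	out/minikube-linux-amd64 -p cert-expiration-572052 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"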

TestForceSystemdFlag (25.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-598501 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-598501 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.470323236s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-598501 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-598501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-598501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-598501: (2.37273567s)
--- PASS: TestForceSystemdFlag (25.13s)
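
The cat of 02-crio.conf above is how the test confirms that --force-systemd reached the container runtime. A narrower version of the same check, grepping for CRI-O's cgroup manager setting (the cgroup_manager key name is the standard CRI-O option, assumed here rather than shown in this log):

	# Expect: cgroup_manager = "systemd" under [crio.runtime].
	out/minikube-linux-amd64 -p force-systemd-flag-598501 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"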

TestForceSystemdEnv (24.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-496811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-496811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.792814072s)
helpers_test.go:175: Cleaning up "force-systemd-env-496811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-496811
E1209 02:30:56.852030   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-496811: (2.405609169s)
--- PASS: TestForceSystemdEnv (24.20s)

TestErrorSpam/setup (23.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-269085 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-269085 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-269085 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-269085 --driver=docker  --container-runtime=crio: (23.180232959s)
--- PASS: TestErrorSpam/setup (23.18s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (6.1s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause: exit status 80 (2.333657029s)

-- stdout --
	* Pausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause: exit status 80 (2.020467273s)

-- stdout --
	* Pausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause: exit status 80 (1.747621694s)

-- stdout --
	* Pausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:00:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.10s)
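
All three exit-80 failures above share one root cause: the pause path shells out to sudo runc list -f json on the node, and /run/runc does not exist there, so minikube cannot enumerate running containers even though CRI-O can. A minimal sketch for reproducing the probe by hand, assuming the nospam-269085 profile were still running:

	# The exact probe the pause path runs, and what CRI-O itself reports.
	out/minikube-linux-amd64 -p nospam-269085 ssh "sudo runc list -f json"   # fails: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p nospam-269085 ssh "sudo crictl ps"           # the same containers are visible to CRI-O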

TestErrorSpam/unpause (5.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause: exit status 80 (2.044082511s)

-- stdout --
	* Unpausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:01:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause: exit status 80 (2.24311516s)

-- stdout --
	* Unpausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause: exit status 80 (1.569922419s)

-- stdout --
	* Unpausing node nospam-269085 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T02:01:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.86s)

TestErrorSpam/stop (12.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 stop: (12.369295599s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-269085 --log_dir /tmp/nospam-269085 stop
--- PASS: TestErrorSpam/stop (12.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/test/nested/copy/14552/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-976894 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.854247817s)
--- PASS: TestFunctional/serial/StartWithProxy (39.85s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.85s)

=== RUN   TestFunctional/serial/SoftStart
I1209 02:02:00.949908   14552 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-976894 --alsologtostderr -v=8: (5.846490636s)
functional_test.go:678: soft start took 5.84723708s for "functional-976894" cluster.
I1209 02:02:06.796826   14552 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.85s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-976894 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-976894 cache add registry.k8s.io/pause:3.3: (1.005141492s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.87s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-976894 /tmp/TestFunctionalserialCacheCmdcacheadd_local712170302/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache add minikube-local-cache-test:functional-976894
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache delete minikube-local-cache-test:functional-976894
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-976894
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.822412ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)
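
Stripped of the harness, the round trip being exercised is: remove the image on the node, confirm crictl no longer sees it (exit 1), then let cache reload push it back from the host-side cache. Condensed from the commands in the log:

	out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-amd64 -p functional-976894 cache reload
	out/minikube-linux-amd64 -p functional-976894 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # image restored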

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 kubectl -- --context functional-976894 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-976894 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (47.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 02:02:18.696154   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:18.702529   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:18.713847   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:18.735158   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:18.776472   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:18.857849   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:19.019338   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:19.340992   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:19.983005   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:21.264577   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:23.827422   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:28.948698   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:39.190770   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:02:59.672708   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-976894 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.137216262s)
functional_test.go:776: restart took 47.137329494s for "functional-976894" cluster.
I1209 02:03:00.347625   14552 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.14s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-976894 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-976894 logs: (1.113694074s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

TestFunctional/serial/LogsFileCmd (1.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 logs --file /tmp/TestFunctionalserialLogsFileCmd3289667612/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-976894 logs --file /tmp/TestFunctionalserialLogsFileCmd3289667612/001/logs.txt: (1.137826606s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

TestFunctional/serial/InvalidService (3.69s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-976894 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-976894
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-976894: exit status 115 (321.630777ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31222 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-976894 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.69s)
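
The exit-115 (SVC_UNREACHABLE) above means minikube service resolved the NodePort but found no running pod behind it. testdata/invalidsvc.yaml is not reproduced in this log; a hypothetical stand-in that triggers the same condition is a Service whose selector matches nothing:

	# Hypothetical equivalent of testdata/invalidsvc.yaml: the selector matches no pod.
	kubectl --context functional-976894 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-amd64 service invalid-svc -p functional-976894   # exit 115: SVC_UNREACHABLE
	kubectl --context functional-976894 delete svc invalid-svc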

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 config get cpus: exit status 14 (77.463919ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 config get cpus: exit status 14 (63.539041ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
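
Both exit status 14 responses are expected: `config get` on an unset key is an error by design. The round trip the test performs, by hand:

	minikube -p functional-976894 config unset cpus
	minikube -p functional-976894 config get cpus    # exit 14: key not set
	minikube -p functional-976894 config set cpus 2
	minikube -p functional-976894 config get cpus    # prints 2
	minikube -p functional-976894 config unset cpus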

TestFunctional/parallel/DashboardCmd (7.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-976894 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-976894 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 53575: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.13s)
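
The "unable to kill pid" note is harmless here: the dashboard process had already exited by the time the test cleaned up. The command under test, runnable by hand:

	# Print the dashboard URL on a fixed port instead of opening a browser.
	minikube -p functional-976894 dashboard --url --port 36195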

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-976894 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.987151ms)

-- stdout --
	* [functional-976894] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 02:03:31.806568   52165 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:03:31.806841   52165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:03:31.806850   52165 out.go:374] Setting ErrFile to fd 2...
	I1209 02:03:31.806854   52165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:03:31.807128   52165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:03:31.807560   52165 out.go:368] Setting JSON to false
	I1209 02:03:31.808625   52165 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2761,"bootTime":1765243051,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:03:31.808706   52165 start.go:143] virtualization: kvm guest
	I1209 02:03:31.810381   52165 out.go:179] * [functional-976894] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:03:31.811783   52165 notify.go:221] Checking for updates...
	I1209 02:03:31.811815   52165 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:03:31.813167   52165 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:03:31.814417   52165 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:03:31.815532   52165 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:03:31.816618   52165 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:03:31.817717   52165 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:03:31.819126   52165 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:03:31.819620   52165 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:03:31.844313   52165 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:03:31.844446   52165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:03:31.908153   52165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-09 02:03:31.896596327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:03:31.908294   52165 docker.go:319] overlay module found
	I1209 02:03:31.910143   52165 out.go:179] * Using the docker driver based on existing profile
	I1209 02:03:31.911550   52165 start.go:309] selected driver: docker
	I1209 02:03:31.911566   52165 start.go:927] validating driver "docker" against &{Name:functional-976894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-976894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:03:31.911688   52165 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:03:31.913461   52165 out.go:203] 
	W1209 02:03:31.914653   52165 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:03:31.915682   52165 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
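
Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is the expected outcome of the first invocation: --dry-run validates the flags against the existing profile without mutating it, and 250MB is below minikube's 1800MB floor. By hand:

	# Validate start flags without touching the cluster; expect exit 23.
	minikube start -p functional-976894 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio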

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-976894 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-976894 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.128048ms)

-- stdout --
	* [functional-976894] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 02:03:32.207428   52549 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:03:32.207538   52549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:03:32.207548   52549 out.go:374] Setting ErrFile to fd 2...
	I1209 02:03:32.207552   52549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:03:32.207863   52549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:03:32.208273   52549 out.go:368] Setting JSON to false
	I1209 02:03:32.209191   52549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2761,"bootTime":1765243051,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:03:32.209241   52549 start.go:143] virtualization: kvm guest
	I1209 02:03:32.211144   52549 out.go:179] * [functional-976894] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:03:32.212335   52549 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:03:32.212334   52549 notify.go:221] Checking for updates...
	I1209 02:03:32.214742   52549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:03:32.215818   52549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:03:32.216832   52549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:03:32.217985   52549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:03:32.219204   52549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:03:32.220658   52549 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:03:32.221163   52549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:03:32.243551   52549 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:03:32.243706   52549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:03:32.299821   52549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-09 02:03:32.290159568 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:03:32.300007   52549 docker.go:319] overlay module found
	I1209 02:03:32.301671   52549 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1209 02:03:32.302825   52549 start.go:309] selected driver: docker
	I1209 02:03:32.302841   52549 start.go:927] validating driver "docker" against &{Name:functional-976894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-976894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:03:32.302947   52549 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:03:32.304845   52549 out.go:203] 
	W1209 02:03:32.306041   52549 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:03:32.307157   52549 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
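
The French output above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure exercised by DryRun ("requested memory allocation of 250MiB is less than the usable minimum of 1800MB"), confirming that translated messages are emitted. minikube derives the language from the process locale; a by-hand sketch (which locale variable the harness actually sets is an assumption here):

	# Request French output, then trigger the same dry-run failure.
	LC_ALL=fr minikube start -p functional-976894 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio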

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
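
The second invocation passes a Go template over the status struct; note that "kublet" is literal label text chosen by the test, not a field name (the field is .Kubelet). The three output shapes, by hand:

	minikube -p functional-976894 status
	minikube -p functional-976894 status -f '{{.Host}},{{.Kubelet}},{{.APIServer}},{{.Kubeconfig}}'
	minikube -p functional-976894 status -o json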

TestFunctional/parallel/ServiceCmdConnect (8.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-976894 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-976894 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-kjfpd" [94d3e615-690b-45bb-97cf-4d88c52ebab2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-kjfpd" [94d3e615-690b-45bb-97cf-4d88c52ebab2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00460875s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31612
functional_test.go:1680: http://192.168.49.2:31612: success! body:
Request served by hello-node-connect-7d85dfc575-kjfpd

HTTP/1.1 GET /

Host: 192.168.49.2:31612
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.69s)
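
This is the canonical NodePort round trip: deploy, expose, resolve the node URL through minikube, and hit it. By hand, with the same image and names as the test:

	kubectl --context functional-976894 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-976894 expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(minikube -p functional-976894 service hello-node-connect --url)"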

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f0bd58d2-8d8f-4d78-8309-288815a926ca] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004207762s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-976894 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-976894 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-976894 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-976894 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:03:15.308694   14552 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [00d6321b-275b-49d2-be22-3fc68d86337e] Pending
helpers_test.go:352: "sp-pod" [00d6321b-275b-49d2-be22-3fc68d86337e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [00d6321b-275b-49d2-be22-3fc68d86337e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00345391s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-976894 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-976894 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-976894 delete -f testdata/storage-provisioner/pod.yaml: (1.017356791s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-976894 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d8521451-6a44-456d-ad82-fbeeecaafb79] Pending
helpers_test.go:352: "sp-pod" [d8521451-6a44-456d-ad82-fbeeecaafb79] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003475517s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-976894 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.71s)
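
The test demonstrates that data written through one pod survives into a second pod mounting the same claim, i.e. the default storage class really persists. The sequence by hand, using the manifests from the minikube repo's testdata/storage-provisioner directory:

	kubectl --context functional-976894 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-976894 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-976894 exec sp-pod -- touch /tmp/mount/foo   # write via pod 1
	kubectl --context functional-976894 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-976894 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-976894 exec sp-pod -- ls /tmp/mount          # foo persists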

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (1.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh -n functional-976894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cp functional-976894:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2482450738/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh -n functional-976894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh -n functional-976894 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)
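
The three copies above cover host-to-node, node-to-host, and copying into a node path that does not yet exist. By hand:

	minikube -p functional-976894 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-976894 cp functional-976894:/home/docker/cp-test.txt ./cp-test.txt
	minikube -p functional-976894 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt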

TestFunctional/parallel/MySQL (24.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-976894 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-6bcdcbc558-lfchf" [406da692-9fee-4198-8453-54045f02b634] Pending
helpers_test.go:352: "mysql-6bcdcbc558-lfchf" [406da692-9fee-4198-8453-54045f02b634] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-6bcdcbc558-lfchf" [406da692-9fee-4198-8453-54045f02b634] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003093962s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;": exit status 1 (119.84568ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1209 02:03:20.674787   14552 retry.go:31] will retry after 1.099307454s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;": exit status 1 (90.362038ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1209 02:03:21.864797   14552 retry.go:31] will retry after 1.307419875s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;": exit status 1 (107.981983ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1209 02:03:23.281051   14552 retry.go:31] will retry after 3.187587132s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;": exit status 1 (102.518728ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1209 02:03:26.571703   14552 retry.go:31] will retry after 4.046870062s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-976894 exec mysql-6bcdcbc558-lfchf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.36s)
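
The ERROR 1045 and ERROR 2002 responses are typical of the mysql image while initialization is still in progress, which is why the harness retries with backoff until the query succeeds. A by-hand equivalent of that retry loop (addressing the deployment from testdata/mysql.yaml by name):

	# Poll until mysqld actually accepts the root password.
	until kubectl --context functional-976894 exec deploy/mysql -- \
	    mysql -ppassword -e 'show databases;'; do
	  sleep 2
	done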

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14552/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /etc/test/nested/copy/14552/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
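
FileSync verifies that files staged under the test's MINIKUBE_HOME are mirrored into the node at the same absolute path. A by-hand sketch (the staging step is an assumption about the harness, which performs it before the cluster starts, since the sync happens at start time):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/14552"
	echo "Test file for checking file sync process" \
	  > "$MINIKUBE_HOME/files/etc/test/nested/copy/14552/hosts"
	minikube -p functional-976894 ssh "sudo cat /etc/test/nested/copy/14552/hosts"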

TestFunctional/parallel/CertSync (1.83s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /etc/ssl/certs/14552.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /usr/share/ca-certificates/14552.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /etc/ssl/certs/145522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /usr/share/ca-certificates/145522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)
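
CertSync checks each synced certificate both at its verbatim path and under its OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0 above). The behavior exercised: certificates staged under the test's MINIKUBE_HOME certs directory are copied into the node's trust store at start. Spot checks by hand:

	minikube -p functional-976894 ssh "sudo cat /etc/ssl/certs/14552.pem"
	minikube -p functional-976894 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash alias of the same cert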

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-976894 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
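
The template above iterates the label map of the first node, which is a quick way to see what minikube stamped onto it:

	kubectl --context functional-976894 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'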

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "sudo systemctl is-active docker": exit status 1 (263.282759ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "sudo systemctl is-active containerd": exit status 1 (256.516818ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
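
The non-zero exits are the pass condition: with crio as the active runtime, docker and containerd must both report "inactive", and `systemctl is-active` signals that with exit status 3, which surfaces through ssh as seen above. By hand:

	minikube -p functional-976894 ssh "sudo systemctl is-active docker"       # inactive, ssh exit 3
	minikube -p functional-976894 ssh "sudo systemctl is-active containerd"   # inactive, ssh exit 3
	minikube -p functional-976894 ssh "sudo systemctl is-active crio"         # active, exit 0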

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-976894 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-976894 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ff8zw" [71ca3f4d-a32d-4078-a068-e0bb04d71926] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ff8zw" [71ca3f4d-a32d-4078-a068-e0bb04d71926] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.002518261s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 48337: os: process already finished
helpers_test.go:519: unable to terminate pid 48124: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-976894 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [653167d6-4850-4233-a6b7-2e55f7f44453] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [653167d6-4850-4233-a6b7-2e55f7f44453] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.004119156s
I1209 02:03:22.526750   14552 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.24s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service list -o json
functional_test.go:1504: Took "508.215992ms" to run "out/minikube-linux-amd64 -p functional-976894 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-976894 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.61.62 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
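
The tunnel serial suite amounts to: keep `minikube tunnel` running, create a LoadBalancer service, wait for it to receive an ingress IP, and reach that IP directly from the host. By hand (on Linux the tunnel prompts for sudo to install routes):

	minikube -p functional-976894 tunnel &
	kubectl --context functional-976894 apply -f testdata/testsvc.yaml
	IP=$(kubectl --context functional-976894 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl "http://$IP"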

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-976894 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30619
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30619
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)
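
HTTPS, Format, and URL exercise three output shapes of `minikube service` against the same NodePort service:

	minikube -p functional-976894 service hello-node --url                      # http URL
	minikube -p functional-976894 service hello-node --url --https              # https scheme
	minikube -p functional-976894 service hello-node --url --format '{{.IP}}'   # node IP only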

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-976894 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-976894
localhost/kicbase/echo-server:functional-976894
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-976894 image ls --format short --alsologtostderr:
I1209 02:03:33.562187   53634 out.go:360] Setting OutFile to fd 1 ...
I1209 02:03:33.562427   53634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.562436   53634 out.go:374] Setting ErrFile to fd 2...
I1209 02:03:33.562440   53634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.562649   53634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:03:33.563168   53634 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.563262   53634 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.563687   53634 cli_runner.go:164] Run: docker container inspect functional-976894 --format={{.State.Status}}
I1209 02:03:33.587169   53634 ssh_runner.go:195] Run: systemctl --version
I1209 02:03:33.587233   53634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976894
I1209 02:03:33.607576   53634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-976894/id_rsa Username:docker}
I1209 02:03:33.704015   53634 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
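As the Stderr above shows, image ls is implemented by SSHing into the node and running crictl there. The same listing can be taken by hand, mirroring the test's own ssh/crictl usage elsewhere in this report (illustrative; assumes the profile is still running):

out/minikube-linux-amd64 -p functional-976894 ssh -- sudo crictl images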

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-976894 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-976894  │ 6caa8e569bf95 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-976894  │ 9056ab77afb8e │ 4.95MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-976894 image ls --format table --alsologtostderr:
I1209 02:03:34.141472   53999 out.go:360] Setting OutFile to fd 1 ...
I1209 02:03:34.141711   53999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:34.141721   53999 out.go:374] Setting ErrFile to fd 2...
I1209 02:03:34.141728   53999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:34.141940   53999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:03:34.142486   53999 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:34.142603   53999 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:34.143036   53999 cli_runner.go:164] Run: docker container inspect functional-976894 --format={{.State.Status}}
I1209 02:03:34.160076   53999 ssh_runner.go:195] Run: systemctl --version
I1209 02:03:34.160131   53999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976894
I1209 02:03:34.177090   53999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-976894/id_rsa Username:docker}
I1209 02:03:34.266907   53999 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-976894 image ls --format json --alsologtostderr:
[{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e513925
24dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2ce
a929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea
6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-976894"],"size":"4945146"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb
7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de6
04a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6caa8e569bf957f95a07167b47f9e9f7707039441c3ab639b7efbb0ea10b4ca0","repoDigests":["localhost/minikube-local-cache-test@sha256:457ac860687594fe6ce1a9c8ed7a8f1ccece090cbfc493aa3529b75f047e5a00"],"repoTags":["localhost/minikube-local-cache-test:functional-976894"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-976894 image ls --format json --alsologtostderr:
I1209 02:03:33.926310   53869 out.go:360] Setting OutFile to fd 1 ...
I1209 02:03:33.926406   53869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.926414   53869 out.go:374] Setting ErrFile to fd 2...
I1209 02:03:33.926418   53869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.926608   53869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:03:33.927313   53869 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.927443   53869 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.928057   53869 cli_runner.go:164] Run: docker container inspect functional-976894 --format={{.State.Status}}
I1209 02:03:33.946779   53869 ssh_runner.go:195] Run: systemctl --version
I1209 02:03:33.946841   53869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976894
I1209 02:03:33.963346   53869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-976894/id_rsa Username:docker}
I1209 02:03:34.053335   53869 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
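The stdout above is a JSON array of image objects (id, repoDigests, repoTags, size), so it pipes cleanly into standard tools; for example, to print only the tags (illustrative, assuming jq is available on the host):

out/minikube-linux-amd64 -p functional-976894 image ls --format json | jq -r '.[].repoTags[]'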

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-976894 image ls --format yaml --alsologtostderr:
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-976894
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6caa8e569bf957f95a07167b47f9e9f7707039441c3ab639b7efbb0ea10b4ca0
repoDigests:
- localhost/minikube-local-cache-test@sha256:457ac860687594fe6ce1a9c8ed7a8f1ccece090cbfc493aa3529b75f047e5a00
repoTags:
- localhost/minikube-local-cache-test:functional-976894
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-976894 image ls --format yaml --alsologtostderr:
I1209 02:03:33.692873   53689 out.go:360] Setting OutFile to fd 1 ...
I1209 02:03:33.692984   53689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.692993   53689 out.go:374] Setting ErrFile to fd 2...
I1209 02:03:33.692997   53689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:33.693198   53689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:03:33.693727   53689 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.693832   53689 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:33.694233   53689 cli_runner.go:164] Run: docker container inspect functional-976894 --format={{.State.Status}}
I1209 02:03:33.712236   53689 ssh_runner.go:195] Run: systemctl --version
I1209 02:03:33.712287   53689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976894
I1209 02:03:33.730652   53689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-976894/id_rsa Username:docker}
I1209 02:03:33.823495   53689 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh pgrep buildkitd: exit status 1 (272.356044ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image build -t localhost/my-image:functional-976894 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-976894 image build -t localhost/my-image:functional-976894 testdata/build --alsologtostderr: (1.749043915s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-976894 image build -t localhost/my-image:functional-976894 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ccacc990bd8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-976894
--> a580c8ce04a
Successfully tagged localhost/my-image:functional-976894
a580c8ce04afe61eb385a5cec3030c6e87731ae8b98cf6c6b7cd9d5281bee69e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-976894 image build -t localhost/my-image:functional-976894 testdata/build --alsologtostderr:
I1209 02:03:34.064695   53961 out.go:360] Setting OutFile to fd 1 ...
I1209 02:03:34.064993   53961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:34.065003   53961 out.go:374] Setting ErrFile to fd 2...
I1209 02:03:34.065007   53961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:03:34.065166   53961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:03:34.065690   53961 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:34.066401   53961 config.go:182] Loaded profile config "functional-976894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:03:34.066822   53961 cli_runner.go:164] Run: docker container inspect functional-976894 --format={{.State.Status}}
I1209 02:03:34.085162   53961 ssh_runner.go:195] Run: systemctl --version
I1209 02:03:34.085200   53961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976894
I1209 02:03:34.103954   53961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-976894/id_rsa Username:docker}
I1209 02:03:34.197732   53961 build_images.go:162] Building image from path: /tmp/build.1190737314.tar
I1209 02:03:34.197794   53961 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:03:34.205159   53961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1190737314.tar
I1209 02:03:34.208688   53961 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1190737314.tar: stat -c "%s %y" /var/lib/minikube/build/build.1190737314.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1190737314.tar': No such file or directory
I1209 02:03:34.208718   53961 ssh_runner.go:362] scp /tmp/build.1190737314.tar --> /var/lib/minikube/build/build.1190737314.tar (3072 bytes)
I1209 02:03:34.225146   53961 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1190737314
I1209 02:03:34.232085   53961 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1190737314 -xf /var/lib/minikube/build/build.1190737314.tar
I1209 02:03:34.239383   53961 crio.go:315] Building image: /var/lib/minikube/build/build.1190737314
I1209 02:03:34.239431   53961 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-976894 /var/lib/minikube/build/build.1190737314 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 02:03:35.732968   53961 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-976894 /var/lib/minikube/build/build.1190737314 --cgroup-manager=cgroupfs: (1.493510468s)
I1209 02:03:35.733045   53961 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1190737314
I1209 02:03:35.743459   53961 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1190737314.tar
I1209 02:03:35.752493   53961 build_images.go:218] Built localhost/my-image:functional-976894 from /tmp/build.1190737314.tar
I1209 02:03:35.752522   53961 build_images.go:134] succeeded building to: functional-976894
I1209 02:03:35.752529   53961 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
2025/12/09 02:03:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.26s)
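The STEP lines in the build output map one-to-one onto Dockerfile instructions, so an equivalent build context can be reconstructed; a sketch under that assumption (the checked-in testdata/build directory and the contents of content.txt are not shown in this report):

# hypothetical re-creation of the build context exercised above
mkdir -p build && echo hello > build/content.txt
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-976894 image build -t localhost/my-image:functional-976894 ./build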

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.070310646s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-976894
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image load --daemon kicbase/echo-server:functional-976894 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image load --daemon kicbase/echo-server:functional-976894 --alsologtostderr
I1209 02:03:27.543369   14552 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-976894
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image load --daemon kicbase/echo-server:functional-976894 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image save kicbase/echo-server:functional-976894 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image rm kicbase/echo-server:functional-976894 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)
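image save and image load round-trip the image through an ordinary tar archive, so the artifact written by ImageSaveToFile above can be inspected on the host before reloading it (plain tar, nothing minikube-specific):

tar -tf /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar | head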

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-976894
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 image save --daemon kicbase/echo-server:functional-976894 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-976894
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdany-port311485435/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765245811547919259" to /tmp/TestFunctionalparallelMountCmdany-port311485435/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765245811547919259" to /tmp/TestFunctionalparallelMountCmdany-port311485435/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765245811547919259" to /tmp/TestFunctionalparallelMountCmdany-port311485435/001/test-1765245811547919259
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.07844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:03:31.868513   14552 retry.go:31] will retry after 680.831312ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:03 test-1765245811547919259
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh cat /mount-9p/test-1765245811547919259
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-976894 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [357f0a5c-b7d6-49c4-8233-f54e78eeac37] Pending
helpers_test.go:352: "busybox-mount" [357f0a5c-b7d6-49c4-8233-f54e78eeac37] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [357f0a5c-b7d6-49c4-8233-f54e78eeac37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [357f0a5c-b7d6-49c4-8233-f54e78eeac37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.002693624s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-976894 logs busybox-mount
E1209 02:03:40.634693   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdany-port311485435/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.00s)
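The pod phases above are driven by testdata/busybox-mount-test.yaml, which this report does not reproduce. Inferred from the label, container name, and the created-by-* files in the log, it is roughly a run-once pod of the following shape (a sketch, not the checked-in manifest; the command line in particular is assumed):

kubectl --context functional-976894 replace --force -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-mount
  labels:
    integration-test: busybox-mount
spec:
  restartPolicy: Never
  containers:
  - name: mount-munger
    image: gcr.io/k8s-minikube/busybox
    command: ["/bin/sh", "-c", "cat /mount-9p/created-by-test && echo test > /mount-9p/created-by-pod && rm /mount-9p/created-by-test-removed-by-pod"]
    volumeMounts:
    - name: test-volume
      mountPath: /mount-9p
  volumes:
  - name: test-volume
    hostPath:
      path: /mount-9p
EOF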

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "389.921511ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.01157ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "345.732931ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.226076ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
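The json output is machine-readable (arrays of valid and invalid profiles), which is what makes it script-friendly compared with the table form; an illustrative query (assumes the usual top-level valid key and jq on the host):

out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'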

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdspecific-port3250107991/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.421565ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:03:41.813763   14552 retry.go:31] will retry after 307.861722ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdspecific-port3250107991/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "sudo umount -f /mount-9p": exit status 1 (258.788838ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-976894 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdspecific-port3250107991/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T" /mount1: exit status 1 (314.597347ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:03:43.420332   14552 retry.go:31] will retry after 643.767173ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-976894 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-976894 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-976894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3768892513/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-976894
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-976894
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-976894
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-11001/.minikube/files/etc/test/nested/copy/14552/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (66.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-497139 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m6.953451378s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (66.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1209 02:04:55.068579   14552 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-497139 --alsologtostderr -v=8: (5.910753874s)
functional_test.go:678: soft start took 5.911060728s for "functional-497139" cluster.
I1209 02:05:00.979666   14552 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-497139 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache add registry.k8s.io/pause:3.3
E1209 02:05:02.556571   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1013000011/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache add minikube-local-cache-test:functional-497139
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache delete minikube-local-cache-test:functional-497139
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.509089ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)
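
The failing "crictl inspecti" in the middle is the point of the test: the image was removed inside the node, and "cache reload" pushes every cached image back. A minimal by-hand version of the same check (a sketch; "minikube" again stands in for the binary under test):

	minikube -p functional-497139 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	minikube -p functional-497139 cache reload    # re-pushes all cached images into the node
	minikube -p functional-497139 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again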

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 kubectl -- --context functional-497139 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-497139 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (43.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-497139 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.396516113s)
functional_test.go:776: restart took 43.396640895s for "functional-497139" cluster.
I1209 02:05:50.459917   14552 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (43.40s)
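
--extra-config takes component.flag=value pairs and may be repeated, which is what the restart above exercises against the existing profile. An illustrative invocation (the kubelet flag here is an assumed example, not part of this run):

	minikube start -p functional-497139 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.max-pods=150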

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-497139 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-497139 logs: (1.128086171s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1043996113/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-497139 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1043996113/001/logs.txt: (1.139483482s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-497139 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-497139
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-497139: exit status 115 (326.722451ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31810 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-497139 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.90s)
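
Exit status 115 is the SVC_UNREACHABLE case shown in stderr: the Service object exists and gets a NodePort, but no running pod backs it. A quick way to confirm that state with stock kubectl (the label selector is whatever the service spec declares; shown as a placeholder):

	kubectl --context functional-497139 get endpoints invalid-svc        # ENDPOINTS column is empty
	kubectl --context functional-497139 get pods -l <service-selector>   # nothing Running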

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 config get cpus: exit status 14 (75.892978ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 config get cpus: exit status 14 (66.290418ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
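
The contract this test leans on: "config get" exits 14 with the error above when the key is unset, and 0 once "config set" has run. That makes it usable as a probe in scripts, for instance:

	if cpus=$(minikube -p functional-497139 config get cpus 2>/dev/null); then
	  echo "cpus pinned to ${cpus}"
	else
	  echo "cpus not set; the default applies"
	fi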

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (8.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497139 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497139 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 70448: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (8.27s)
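
The "unable to kill pid" note is harmless here: the dashboard process had already exited when the test tried to stop it. For reference, the command under test:

	minikube -p functional-497139 dashboard --url --port 36195   # prints the proxy URL instead of opening a browser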

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497139 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (159.800142ms)

-- stdout --
	* [functional-497139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 02:06:06.657583   69912 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:06:06.657847   69912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:06.657857   69912 out.go:374] Setting ErrFile to fd 2...
	I1209 02:06:06.657861   69912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:06.658083   69912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:06:06.658512   69912 out.go:368] Setting JSON to false
	I1209 02:06:06.659466   69912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2916,"bootTime":1765243051,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:06:06.659515   69912 start.go:143] virtualization: kvm guest
	I1209 02:06:06.661172   69912 out.go:179] * [functional-497139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:06:06.662520   69912 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:06:06.662527   69912 notify.go:221] Checking for updates...
	I1209 02:06:06.664469   69912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:06:06.665608   69912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:06:06.666652   69912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:06:06.670788   69912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:06:06.671828   69912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:06:06.673150   69912 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:06:06.673679   69912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:06:06.696171   69912 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:06:06.696252   69912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:06:06.752105   69912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-09 02:06:06.742950372 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:06:06.752255   69912 docker.go:319] overlay module found
	I1209 02:06:06.754387   69912 out.go:179] * Using the docker driver based on existing profile
	I1209 02:06:06.755436   69912 start.go:309] selected driver: docker
	I1209 02:06:06.755448   69912 start.go:927] validating driver "docker" against &{Name:functional-497139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-497139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:06:06.755521   69912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:06:06.757074   69912 out.go:203] 
	W1209 02:06:06.758084   69912 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:06:06.759404   69912 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)
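
--dry-run runs minikube's full validation pass without starting anything, so the first invocation above fails deliberately (250MB is below the 1800MB floor) while the second, flag-clean one passes. The failure surfaces as a distinct exit code, which is what the test asserts:

	minikube start -p functional-497139 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23 in this run (RSRC_INSUFFICIENT_REQ_MEMORY)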

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497139 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497139 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (169.421845ms)

-- stdout --
	* [functional-497139] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 02:06:05.091147   68808 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:06:05.091238   68808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:05.091246   68808 out.go:374] Setting ErrFile to fd 2...
	I1209 02:06:05.091250   68808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:05.091550   68808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:06:05.091958   68808 out.go:368] Setting JSON to false
	I1209 02:06:05.092937   68808 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2914,"bootTime":1765243051,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:06:05.092983   68808 start.go:143] virtualization: kvm guest
	I1209 02:06:05.094907   68808 out.go:179] * [functional-497139] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:06:05.096107   68808 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:06:05.096164   68808 notify.go:221] Checking for updates...
	I1209 02:06:05.098236   68808 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:06:05.099426   68808 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:06:05.101041   68808 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:06:05.102146   68808 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:06:05.103424   68808 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:06:05.105143   68808 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:06:05.105651   68808 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:06:05.129783   68808 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:06:05.129871   68808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:06:05.193625   68808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-09 02:06:05.182462352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:06:05.193775   68808 docker.go:319] overlay module found
	I1209 02:06:05.196730   68808 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1209 02:06:05.197804   68808 start.go:309] selected driver: docker
	I1209 02:06:05.197818   68808 start.go:927] validating driver "docker" against &{Name:functional-497139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-497139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:06:05.197915   68808 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:06:05.199331   68808 out.go:203] 
	W1209 02:06:05.200335   68808 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:06:05.201284   68808 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)
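
This is the same deliberate dry-run failure as above, run under a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY guidance comes back translated. One way to reproduce by hand (assuming the locale is picked up from the environment, as upstream minikube's bundled translations are):

	LC_ALL=fr_FR.UTF-8 minikube start -p functional-497139 --dry-run --memory 250MB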

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.91s)
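
status -f takes a Go template over the status struct (Host, Kubelet, APIServer, Kubeconfig); "kublet" in the template above is just the label the test chose for its output, not a field name. The same data is available structured:

	minikube -p functional-497139 status -f '{{.Host}}/{{.APIServer}}'
	minikube -p functional-497139 status -o json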

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (12.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-497139 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-497139 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-ftt2k" [9be89431-0843-453c-97ea-df73a3186c58] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-ftt2k" [9be89431-0843-453c-97ea-df73a3186c58] Running
2025/12/09 02:06:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004735559s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31699
functional_test.go:1680: http://192.168.49.2:31699: success! body:
Request served by hello-node-connect-9f67c86d4-ftt2k

HTTP/1.1 GET /

Host: 192.168.49.2:31699
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (12.89s)
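
End to end, this is the standard NodePort round trip, with echo-server mirroring the request back (the body above). Condensed, by hand (deployment name is illustrative):

	kubectl --context functional-497139 create deployment hello --image kicbase/echo-server
	kubectl --context functional-497139 expose deployment hello --type=NodePort --port=8080
	curl -s "$(minikube -p functional-497139 service hello --url)"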

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [67006a02-97b4-448b-86d8-998f75336ecd] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002857362s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-497139 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-497139 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-497139 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-497139 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:06:03.111861   14552 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a3a835f-63ab-4f46-937d-adf2dad06ca3] Pending
helpers_test.go:352: "sp-pod" [2a3a835f-63ab-4f46-937d-adf2dad06ca3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003260846s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-497139 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-497139 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-497139 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:06:10.899171   14552 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c76d0ac2-db0d-49a7-a040-ef45b5b5454c] Pending
helpers_test.go:352: "sp-pod" [c76d0ac2-db0d-49a7-a040-ef45b5b5454c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c76d0ac2-db0d-49a7-a040-ef45b5b5454c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004959358s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-497139 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.28s)
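
The second pod is what makes this a persistence test: a file written through the first pod survives that pod's deletion and shows up in a fresh pod bound to the same claim. Condensed (pvc.yaml and pod.yaml stand in for the testdata manifests):

	kubectl --context functional-497139 apply -f pvc.yaml -f pod.yaml
	kubectl --context functional-497139 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-497139 delete pod sp-pod
	kubectl --context functional-497139 apply -f pod.yaml
	# once the new pod is Running, the file written by the old one is still there:
	kubectl --context functional-497139 exec sp-pod -- ls /tmp/mount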

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh -n functional-497139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cp functional-497139:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2699749786/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh -n functional-497139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh -n functional-497139 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.73s)
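
The three cp invocations cover both directions plus a destination path that does not exist yet (minikube creates /tmp/does/not/exist/). The general shapes, with illustrative filenames:

	minikube -p functional-497139 cp ./local-file.txt /home/docker/remote.txt                   # host -> node
	minikube -p functional-497139 cp functional-497139:/home/docker/remote.txt ./roundtrip.txt  # node -> host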

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (25.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-497139 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-7d7b65bc95-gsrth" [61b3ce14-7876-4ee6-9887-b5bb4c38afa5] Pending
helpers_test.go:352: "mysql-7d7b65bc95-gsrth" [61b3ce14-7876-4ee6-9887-b5bb4c38afa5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-7d7b65bc95-gsrth" [61b3ce14-7876-4ee6-9887-b5bb4c38afa5] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 16.003841638s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;": exit status 1 (88.003362ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1209 02:06:31.164627   14552 retry.go:31] will retry after 1.498192803s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;": exit status 1 (86.751974ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1209 02:06:32.750781   14552 retry.go:31] will retry after 986.155321ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;": exit status 1 (83.695022ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1209 02:06:33.821800   14552 retry.go:31] will retry after 2.073753965s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;": exit status 1 (88.403446ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1209 02:06:35.985331   14552 retry.go:31] will retry after 3.948440359s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497139 exec mysql-7d7b65bc95-gsrth -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (25.10s)
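
The retry loop is expected behavior, not flakiness: the pod reports Running before mysqld finishes initializing, so early attempts fail with access-denied and then socket errors until the server is actually up (roughly 7s of backoff in this run). A script would poll the same way, e.g.:

	until kubectl --context functional-497139 exec deploy/mysql -- \
	    mysql -ppassword -e 'SELECT 1' >/dev/null 2>&1; do sleep 2; done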

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14552/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /etc/test/nested/copy/14552/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
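
The synced file comes from the test host's MINIKUBE_HOME: anything under ~/.minikube/files/ is copied into the node at the same relative path on start (14552 is this run's test PID, used to keep the path unique). The mechanism, as a sketch with an assumed profile name:

	mkdir -p ~/.minikube/files/etc/demo
	echo "hello" > ~/.minikube/files/etc/demo/greeting
	minikube start -p demo
	minikube -p demo ssh "cat /etc/demo/greeting"   # prints hello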

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /etc/ssl/certs/14552.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /usr/share/ca-certificates/14552.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /etc/ssl/certs/145522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /usr/share/ca-certificates/145522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.59s)
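
CertSync is the CA-certificate counterpart of FileSync: PEM files placed under ~/.minikube/certs/ land in the node under /etc/ssl/certs and /usr/share/ca-certificates, both by name and by their OpenSSL subject-hash name, which is where 51391683.0 and 3ec20f2e.0 come from. Computing the hash for a given certificate:

	openssl x509 -noout -hash -in 14552.pem   # prints e.g. 51391683, matching the .0 file above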

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-497139 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)
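
The go-template walks the first node's label map and prints the keys. The same shape is handy ad hoc; add the values with {{$v}}:

	kubectl --context functional-497139 get nodes -o go-template \
	  --template '{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'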

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "sudo systemctl is-active docker": exit status 1 (280.87803ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "sudo systemctl is-active containerd": exit status 1 (293.513946ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
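
With crio as the active runtime, the test asserts the other two engines are stopped. systemctl is-active prints the unit state and encodes it in the exit code (0 when active; 3, as seen in the ssh exit status above, when inactive), so it doubles as a scriptable probe:

	minikube -p functional-497139 ssh "sudo systemctl is-active crio"     # active, exit 0
	minikube -p functional-497139 ssh "sudo systemctl is-active docker"   # inactive, exit 3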

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-497139 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-497139 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-ff2z5" [512a0e83-2193-4bca-a408-47ef064464a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-ff2z5" [512a0e83-2193-4bca-a408-47ef064464a6] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004282646s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.17s)
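
The "waiting 10m0s for pods matching ..." step reduces to polling the pod list for the app=hello-node label until a pod reports phase Running. A rough sketch of such a loop via kubectl, reusing the context name from the log; waitRunning and the 2s poll interval are assumptions, not the helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls pods matching label in the given kubectl context until
// one reports phase Running or the timeout elapses.
func waitRunning(context, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q within %v", label, timeout)
}

func main() {
	fmt.Println(waitRunning("functional-497139", "app=hello-node", 10*time.Minute))
}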

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497139 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-497139
localhost/kicbase/echo-server:functional-497139
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497139 image ls --format short --alsologtostderr:
I1209 02:06:17.416113   73409 out.go:360] Setting OutFile to fd 1 ...
I1209 02:06:17.416358   73409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:17.416368   73409 out.go:374] Setting ErrFile to fd 2...
I1209 02:06:17.416372   73409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:17.416560   73409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:06:17.417142   73409 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:17.417267   73409 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:17.417875   73409 cli_runner.go:164] Run: docker container inspect functional-497139 --format={{.State.Status}}
I1209 02:06:17.435494   73409 ssh_runner.go:195] Run: systemctl --version
I1209 02:06:17.435535   73409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-497139
I1209 02:06:17.453329   73409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-497139/id_rsa Username:docker}
I1209 02:06:17.543696   73409 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497139 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-497139  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-497139  │ 6caa8e569bf95 │ 3.33kB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497139 image ls --format table --alsologtostderr:
I1209 02:06:20.846128   74215 out.go:360] Setting OutFile to fd 1 ...
I1209 02:06:20.846480   74215 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:20.846493   74215 out.go:374] Setting ErrFile to fd 2...
I1209 02:06:20.846500   74215 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:20.846817   74215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:06:20.847552   74215 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:20.847710   74215 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:20.848358   74215 cli_runner.go:164] Run: docker container inspect functional-497139 --format={{.State.Status}}
I1209 02:06:20.872711   74215 ssh_runner.go:195] Run: systemctl --version
I1209 02:06:20.872809   74215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-497139
I1209 02:06:20.898424   74215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-497139/id_rsa Username:docker}
I1209 02:06:21.002490   74215 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497139 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-497139
size: "4945146"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6caa8e569bf957f95a07167b47f9e9f7707039441c3ab639b7efbb0ea10b4ca0
repoDigests:
- localhost/minikube-local-cache-test@sha256:457ac860687594fe6ce1a9c8ed7a8f1ccece090cbfc493aa3529b75f047e5a00
repoTags:
- localhost/minikube-local-cache-test:functional-497139
size: "3330"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497139 image ls --format yaml --alsologtostderr:
I1209 02:06:17.634486   73463 out.go:360] Setting OutFile to fd 1 ...
I1209 02:06:17.634749   73463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:17.634757   73463 out.go:374] Setting ErrFile to fd 2...
I1209 02:06:17.634761   73463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:17.634941   73463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:06:17.635475   73463 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:17.635570   73463 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:17.635966   73463 cli_runner.go:164] Run: docker container inspect functional-497139 --format={{.State.Status}}
I1209 02:06:17.653961   73463 ssh_runner.go:195] Run: systemctl --version
I1209 02:06:17.654009   73463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-497139
I1209 02:06:17.671183   73463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-497139/id_rsa Username:docker}
I1209 02:06:17.761776   73463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)
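
All three list formats (short, table, yaml) are rendered from the same underlying call, "sudo crictl images --output json", visible at the end of each stderr trace. A sketch of decoding that JSON into the id/repoTags/repoDigests/size fields shown in the yaml listing above; the exact key names in crictl's JSON output are an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names mirror the keys visible in the yaml listing; crictl's JSON
// schema is assumed to use the same camelCase names.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // same truncation as the table's IMAGE ID column
		}
		fmt.Println(id, img.RepoTags)
	}
}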

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (7.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh pgrep buildkitd: exit status 1 (257.570622ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image build -t localhost/my-image:functional-497139 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-497139 image build -t localhost/my-image:functional-497139 testdata/build --alsologtostderr: (7.2974971s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497139 image build -t localhost/my-image:functional-497139 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f963cc467ad
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-497139
--> ecad4a4f4c1
Successfully tagged localhost/my-image:functional-497139
ecad4a4f4c1c848b4d838ef725e21f4dde7071cc8c099548a7ea0762826f1790
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497139 image build -t localhost/my-image:functional-497139 testdata/build --alsologtostderr:
I1209 02:06:18.110745   73627 out.go:360] Setting OutFile to fd 1 ...
I1209 02:06:18.111028   73627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:18.111038   73627 out.go:374] Setting ErrFile to fd 2...
I1209 02:06:18.111045   73627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:06:18.111218   73627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
I1209 02:06:18.111770   73627 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:18.112352   73627 config.go:182] Loaded profile config "functional-497139": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:06:18.112805   73627 cli_runner.go:164] Run: docker container inspect functional-497139 --format={{.State.Status}}
I1209 02:06:18.131060   73627 ssh_runner.go:195] Run: systemctl --version
I1209 02:06:18.131102   73627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-497139
I1209 02:06:18.146569   73627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/functional-497139/id_rsa Username:docker}
I1209 02:06:18.236839   73627 build_images.go:162] Building image from path: /tmp/build.3829837003.tar
I1209 02:06:18.236909   73627 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:06:18.245173   73627 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3829837003.tar
I1209 02:06:18.248734   73627 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3829837003.tar: stat -c "%s %y" /var/lib/minikube/build/build.3829837003.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3829837003.tar': No such file or directory
I1209 02:06:18.248759   73627 ssh_runner.go:362] scp /tmp/build.3829837003.tar --> /var/lib/minikube/build/build.3829837003.tar (3072 bytes)
I1209 02:06:18.266876   73627 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3829837003
I1209 02:06:18.274907   73627 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3829837003 -xf /var/lib/minikube/build/build.3829837003.tar
I1209 02:06:18.282796   73627 crio.go:315] Building image: /var/lib/minikube/build/build.3829837003
I1209 02:06:18.282856   73627 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-497139 /var/lib/minikube/build/build.3829837003 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 02:06:25.324843   73627 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-497139 /var/lib/minikube/build/build.3829837003 --cgroup-manager=cgroupfs: (7.041939527s)
I1209 02:06:25.324927   73627 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3829837003
I1209 02:06:25.332943   73627 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3829837003.tar
I1209 02:06:25.340596   73627 build_images.go:218] Built localhost/my-image:functional-497139 from /tmp/build.3829837003.tar
I1209 02:06:25.340626   73627 build_images.go:134] succeeded building to: functional-497139
I1209 02:06:25.340642   73627 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (7.78s)
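
The stderr trace spells out the full build path: the local context is tarred under /tmp, copied to the node, unpacked under /var/lib/minikube/build, built with "sudo podman build ... --cgroup-manager=cgroupfs" (crio itself has no build endpoint, so podman does the building), and the staging files are removed. A condensed sketch of that sequence, with placeholder paths and tag; the real build_images.go runs every step over SSH via ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// buildFromTar mirrors the staged build in the log: unpack a context tarball
// into a scratch directory, build it with podman, then clean up the staging
// files. All steps here run locally for illustration.
func buildFromTar(tarPath, dir, tag string) error {
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", tarPath},
		{"sudo", "podman", "build", "-t", tag, dir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir},
		{"sudo", "rm", "-f", tarPath},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(buildFromTar("/var/lib/minikube/build/build.tar",
		"/var/lib/minikube/build/ctx", "localhost/my-image:functional-497139"))
}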

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image load --daemon kicbase/echo-server:functional-497139 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image load --daemon kicbase/echo-server:functional-497139 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 66839: os: process already finished
helpers_test.go:519: unable to terminate pid 66615: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-497139 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a830f137-df33-4600-8466-1cb7fb473b0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a830f137-df33-4600-8466-1cb7fb473b0b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003186189s
I1209 02:06:07.545740   14552 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-497139
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image load --daemon kicbase/echo-server:functional-497139 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image save kicbase/echo-server:functional-497139 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image rm kicbase/echo-server:functional-497139 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-497139
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 image save --daemon kicbase/echo-server:functional-497139 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "313.043339ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.285085ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)
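
The Took "313.043339ms" lines are plain wall-clock measurements around each invocation. A sketch of that wrapper, assuming a bare exec call; timedRun is hypothetical, not the functional_test.go helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun executes a command and returns how long it took, the same idea
// behind the Took "..." log lines above.
func timedRun(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	return time.Since(start), err
}

func main() {
	d, err := timedRun("out/minikube-linux-amd64", "profile", "list")
	fmt.Printf("Took %q to run profile list (err: %v)\n", d.String(), err)
}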

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "339.855035ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.206843ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service list -o json
functional_test.go:1504: Took "349.682057ms" to run "out/minikube-linux-amd64 -p functional-497139 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1614721683/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765245965206616802" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1614721683/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765245965206616802" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1614721683/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765245965206616802" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1614721683/001/test-1765245965206616802
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.635711ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 02:06:05.492677   14552 retry.go:31] will retry after 397.32637ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 test-1765245965206616802
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh cat /mount-9p/test-1765245965206616802
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-497139 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b417dba5-b628-4d48-8c11-739a259a2929] Pending
helpers_test.go:352: "busybox-mount" [b417dba5-b628-4d48-8c11-739a259a2929] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b417dba5-b628-4d48-8c11-739a259a2929] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b417dba5-b628-4d48-8c11-739a259a2929] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003936336s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-497139 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1614721683/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.81s)
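
The "will retry after 397.32637ms" line is the harness recovering from a race: the first findmnt probe can run before the mount daemon has finished mounting, so the check is retried after a randomized delay. A generic sketch of that pattern, assuming a simple deadline policy rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with a randomized backoff (up to 1s) until it
// succeeds or the deadline passes, echoing the "will retry after ..." lines.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		wait := time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	attempts := 0
	fmt.Println(retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("attempt %d failed", attempts)
		}
		return nil
	}))
}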

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30855
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30855
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-497139 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.107.218 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
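
The AccessDirect check reduces to an HTTP GET against the tunneled ClusterIP. A minimal sketch using the address from the log line above; the 5s timeout is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Address taken from the tunnel log line above; a 200 response means
	// minikube tunnel is routing the service IP correctly.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.111.107.218")
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel status:", resp.Status)
}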

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-497139 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2580615956/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.107745ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 02:06:11.351464   14552 retry.go:31] will retry after 302.706295ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2580615956/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "sudo umount -f /mount-9p": exit status 1 (262.775754ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-497139 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2580615956/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T" /mount1: exit status 1 (383.088901ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 02:06:13.075738   14552 retry.go:31] will retry after 738.909488ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-497139 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497139 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3740318132/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497139 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-497139
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (146.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1209 02:07:18.694839   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:07:46.398854   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.552730   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.559087   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.570443   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.591790   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.633116   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.714467   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:06.876193   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:07.197784   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:07.839773   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:09.121386   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:11.683234   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:16.805282   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:27.047422   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:08:47.529358   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m25.71536847s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (146.40s)
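Note: the start invocation above maps directly onto a plain minikube command; a minimal repro sketch, assuming a local minikube binary on PATH and an illustrative profile name (ha-demo) in place of the harness-generated one:
	minikube start -p ha-demo --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	minikube -p ha-demo status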

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.03s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 kubectl -- rollout status deployment/busybox: (2.05062858s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-f72pp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-jkvw4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-nqjhp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-f72pp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-jkvw4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-nqjhp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-f72pp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-jkvw4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-nqjhp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.03s)
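Note: the DNS assertions above reduce to running nslookup inside every busybox replica; a sketch of the same loop, assuming the illustrative ha-demo profile from the earlier note and an already-applied busybox deployment:
	kubectl --context ha-demo rollout status deployment/busybox
	for pod in $(kubectl --context ha-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context ha-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done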

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-f72pp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-f72pp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-jkvw4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-jkvw4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-nqjhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 kubectl -- exec busybox-7b57f96db7-nqjhp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
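Note: the awk 'NR==5' / cut pipeline above only extracts the resolved address from busybox's nslookup output; the check reduces to the following sketch (POD is a placeholder for any running pod name):
	HOST_IP=$(kubectl --context ha-demo exec POD -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-demo exec POD -- sh -c "ping -c 1 $HOST_IP"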

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.36s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node add --alsologtostderr -v 5
E1209 02:09:28.491492   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 node add --alsologtostderr -v 5: (52.530834298s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.36s)
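Note: node add with no role flag joins a worker, which is why the fourth node (-m04) reports type Worker in the later status output; sketch with the illustrative profile:
	minikube -p ha-demo node add
	minikube -p ha-demo status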

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-599798 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.23s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp testdata/cp-test.txt ha-599798:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile745211518/001/cp-test_ha-599798.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798:/home/docker/cp-test.txt ha-599798-m02:/home/docker/cp-test_ha-599798_ha-599798-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test_ha-599798_ha-599798-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798:/home/docker/cp-test.txt ha-599798-m03:/home/docker/cp-test_ha-599798_ha-599798-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test_ha-599798_ha-599798-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798:/home/docker/cp-test.txt ha-599798-m04:/home/docker/cp-test_ha-599798_ha-599798-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test_ha-599798_ha-599798-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp testdata/cp-test.txt ha-599798-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile745211518/001/cp-test_ha-599798-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m02:/home/docker/cp-test.txt ha-599798:/home/docker/cp-test_ha-599798-m02_ha-599798.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test_ha-599798-m02_ha-599798.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m02:/home/docker/cp-test.txt ha-599798-m03:/home/docker/cp-test_ha-599798-m02_ha-599798-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test_ha-599798-m02_ha-599798-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m02:/home/docker/cp-test.txt ha-599798-m04:/home/docker/cp-test_ha-599798-m02_ha-599798-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test_ha-599798-m02_ha-599798-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp testdata/cp-test.txt ha-599798-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile745211518/001/cp-test_ha-599798-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m03:/home/docker/cp-test.txt ha-599798:/home/docker/cp-test_ha-599798-m03_ha-599798.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test_ha-599798-m03_ha-599798.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m03:/home/docker/cp-test.txt ha-599798-m02:/home/docker/cp-test_ha-599798-m03_ha-599798-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test_ha-599798-m03_ha-599798-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m03:/home/docker/cp-test.txt ha-599798-m04:/home/docker/cp-test_ha-599798-m03_ha-599798-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test_ha-599798-m03_ha-599798-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp testdata/cp-test.txt ha-599798-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile745211518/001/cp-test_ha-599798-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m04:/home/docker/cp-test.txt ha-599798:/home/docker/cp-test_ha-599798-m04_ha-599798.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798 "sudo cat /home/docker/cp-test_ha-599798-m04_ha-599798.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m04:/home/docker/cp-test.txt ha-599798-m02:/home/docker/cp-test_ha-599798-m04_ha-599798-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m02 "sudo cat /home/docker/cp-test_ha-599798-m04_ha-599798-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 cp ha-599798-m04:/home/docker/cp-test.txt ha-599798-m03:/home/docker/cp-test_ha-599798-m04_ha-599798-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 ssh -n ha-599798-m03 "sudo cat /home/docker/cp-test_ha-599798-m04_ha-599798-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.23s)
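Note: the matrix above exercises all three directions minikube cp supports; in general form (paths and node names illustrative, following the -m02 suffix convention seen above):
	minikube -p ha-demo cp ./local.txt ha-demo:/home/docker/remote.txt                          # host -> node
	minikube -p ha-demo cp ha-demo:/home/docker/remote.txt ./roundtrip.txt                      # node -> host
	minikube -p ha-demo cp ha-demo:/home/docker/remote.txt ha-demo-m02:/home/docker/copy.txt    # node -> node
	minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/copy.txt"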

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (18.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 node stop m02 --alsologtostderr -v 5: (18.049526434s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5: exit status 7 (663.875622ms)

-- stdout --
	ha-599798
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-599798-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-599798-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-599798-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 02:10:43.116033   94784 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:10:43.116316   94784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:43.116326   94784 out.go:374] Setting ErrFile to fd 2...
	I1209 02:10:43.116330   94784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:10:43.116784   94784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:10:43.116958   94784 out.go:368] Setting JSON to false
	I1209 02:10:43.116982   94784 mustload.go:66] Loading cluster: ha-599798
	I1209 02:10:43.117027   94784 notify.go:221] Checking for updates...
	I1209 02:10:43.117331   94784 config.go:182] Loaded profile config "ha-599798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:10:43.117346   94784 status.go:174] checking status of ha-599798 ...
	I1209 02:10:43.117814   94784 cli_runner.go:164] Run: docker container inspect ha-599798 --format={{.State.Status}}
	I1209 02:10:43.136458   94784 status.go:371] ha-599798 host status = "Running" (err=<nil>)
	I1209 02:10:43.136496   94784 host.go:66] Checking if "ha-599798" exists ...
	I1209 02:10:43.136867   94784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-599798
	I1209 02:10:43.153804   94784 host.go:66] Checking if "ha-599798" exists ...
	I1209 02:10:43.154040   94784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:10:43.154084   94784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-599798
	I1209 02:10:43.170620   94784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/ha-599798/id_rsa Username:docker}
	I1209 02:10:43.260505   94784 ssh_runner.go:195] Run: systemctl --version
	I1209 02:10:43.266658   94784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:10:43.279254   94784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:10:43.334320   94784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:10:43.32324734 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:10:43.335081   94784 kubeconfig.go:125] found "ha-599798" server: "https://192.168.49.254:8443"
	I1209 02:10:43.335122   94784 api_server.go:166] Checking apiserver status ...
	I1209 02:10:43.335174   94784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:10:43.346386   94784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	W1209 02:10:43.354389   94784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:10:43.354433   94784 ssh_runner.go:195] Run: ls
	I1209 02:10:43.357878   94784 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 02:10:43.363530   94784 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 02:10:43.363553   94784 status.go:463] ha-599798 apiserver status = Running (err=<nil>)
	I1209 02:10:43.363563   94784 status.go:176] ha-599798 status: &{Name:ha-599798 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:10:43.363582   94784 status.go:174] checking status of ha-599798-m02 ...
	I1209 02:10:43.363902   94784 cli_runner.go:164] Run: docker container inspect ha-599798-m02 --format={{.State.Status}}
	I1209 02:10:43.381412   94784 status.go:371] ha-599798-m02 host status = "Stopped" (err=<nil>)
	I1209 02:10:43.381433   94784 status.go:384] host is not running, skipping remaining checks
	I1209 02:10:43.381440   94784 status.go:176] ha-599798-m02 status: &{Name:ha-599798-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:10:43.381458   94784 status.go:174] checking status of ha-599798-m03 ...
	I1209 02:10:43.381772   94784 cli_runner.go:164] Run: docker container inspect ha-599798-m03 --format={{.State.Status}}
	I1209 02:10:43.400421   94784 status.go:371] ha-599798-m03 host status = "Running" (err=<nil>)
	I1209 02:10:43.400441   94784 host.go:66] Checking if "ha-599798-m03" exists ...
	I1209 02:10:43.400709   94784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-599798-m03
	I1209 02:10:43.416900   94784 host.go:66] Checking if "ha-599798-m03" exists ...
	I1209 02:10:43.417171   94784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:10:43.417217   94784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-599798-m03
	I1209 02:10:43.433508   94784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/ha-599798-m03/id_rsa Username:docker}
	I1209 02:10:43.525111   94784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:10:43.537359   94784 kubeconfig.go:125] found "ha-599798" server: "https://192.168.49.254:8443"
	I1209 02:10:43.537382   94784 api_server.go:166] Checking apiserver status ...
	I1209 02:10:43.537408   94784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:10:43.547182   94784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W1209 02:10:43.554801   94784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:10:43.554839   94784 ssh_runner.go:195] Run: ls
	I1209 02:10:43.558212   94784 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 02:10:43.562335   94784 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 02:10:43.562353   94784 status.go:463] ha-599798-m03 apiserver status = Running (err=<nil>)
	I1209 02:10:43.562360   94784 status.go:176] ha-599798-m03 status: &{Name:ha-599798-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:10:43.562373   94784 status.go:174] checking status of ha-599798-m04 ...
	I1209 02:10:43.562570   94784 cli_runner.go:164] Run: docker container inspect ha-599798-m04 --format={{.State.Status}}
	I1209 02:10:43.581023   94784 status.go:371] ha-599798-m04 host status = "Running" (err=<nil>)
	I1209 02:10:43.581040   94784 host.go:66] Checking if "ha-599798-m04" exists ...
	I1209 02:10:43.581276   94784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-599798-m04
	I1209 02:10:43.599226   94784 host.go:66] Checking if "ha-599798-m04" exists ...
	I1209 02:10:43.599492   94784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:10:43.599527   94784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-599798-m04
	I1209 02:10:43.620469   94784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/ha-599798-m04/id_rsa Username:docker}
	I1209 02:10:43.709158   94784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:10:43.720878   94784 status.go:176] ha-599798-m04 status: &{Name:ha-599798-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.71s)
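Note: the exit status 7 above is expected rather than a failure: minikube status exits non-zero while any node is down, and the test only asserts the per-node fields. Sketch of the same check (illustrative profile):
	minikube -p ha-demo node stop m02
	minikube -p ha-demo status || echo "status exited $?; non-zero is expected with a stopped node"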

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node start m02 --alsologtostderr -v 5
E1209 02:10:50.413691   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:56.851840   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:56.858206   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:56.869511   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:56.890847   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:56.932159   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:57.013508   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:57.175021   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:57.496721   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:10:58.138739   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 node start m02 --alsologtostderr -v 5: (13.9333546s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.82s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1209 02:10:59.420811   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 stop --alsologtostderr -v 5
E1209 02:11:01.982339   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:11:07.103583   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:11:17.345500   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:11:37.827340   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 stop --alsologtostderr -v 5: (50.128716844s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 start --wait true --alsologtostderr -v 5
E1209 02:12:18.695045   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:12:18.789499   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 start --wait true --alsologtostderr -v 5: (58.504310287s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.76s)
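Note: the property under test is that a full stop/start cycle preserves the node list; the manual equivalent is a before/after comparison (illustrative profile and file names):
	minikube -p ha-demo node list > before.txt
	minikube -p ha-demo stop
	minikube -p ha-demo start --wait true
	minikube -p ha-demo node list > after.txt
	diff before.txt after.txt   # expected to print nothing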

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 node delete m03 --alsologtostderr -v 5: (9.651458324s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (43.86s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 stop --alsologtostderr -v 5
E1209 02:13:06.551839   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:13:34.255137   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:13:40.712526   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 stop --alsologtostderr -v 5: (43.747464115s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5: exit status 7 (113.41431ms)

-- stdout --
	ha-599798
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-599798-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-599798-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 02:13:43.735784  109109 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:13:43.735868  109109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:13:43.735876  109109 out.go:374] Setting ErrFile to fd 2...
	I1209 02:13:43.735880  109109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:13:43.736088  109109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:13:43.736258  109109 out.go:368] Setting JSON to false
	I1209 02:13:43.736280  109109 mustload.go:66] Loading cluster: ha-599798
	I1209 02:13:43.736365  109109 notify.go:221] Checking for updates...
	I1209 02:13:43.736616  109109 config.go:182] Loaded profile config "ha-599798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:13:43.736629  109109 status.go:174] checking status of ha-599798 ...
	I1209 02:13:43.737067  109109 cli_runner.go:164] Run: docker container inspect ha-599798 --format={{.State.Status}}
	I1209 02:13:43.758740  109109 status.go:371] ha-599798 host status = "Stopped" (err=<nil>)
	I1209 02:13:43.758783  109109 status.go:384] host is not running, skipping remaining checks
	I1209 02:13:43.758792  109109 status.go:176] ha-599798 status: &{Name:ha-599798 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:13:43.758835  109109 status.go:174] checking status of ha-599798-m02 ...
	I1209 02:13:43.759054  109109 cli_runner.go:164] Run: docker container inspect ha-599798-m02 --format={{.State.Status}}
	I1209 02:13:43.775878  109109 status.go:371] ha-599798-m02 host status = "Stopped" (err=<nil>)
	I1209 02:13:43.775895  109109 status.go:384] host is not running, skipping remaining checks
	I1209 02:13:43.775900  109109 status.go:176] ha-599798-m02 status: &{Name:ha-599798-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:13:43.775916  109109 status.go:174] checking status of ha-599798-m04 ...
	I1209 02:13:43.776137  109109 cli_runner.go:164] Run: docker container inspect ha-599798-m04 --format={{.State.Status}}
	I1209 02:13:43.792156  109109 status.go:371] ha-599798-m04 host status = "Stopped" (err=<nil>)
	I1209 02:13:43.792180  109109 status.go:384] host is not running, skipping remaining checks
	I1209 02:13:43.792185  109109 status.go:176] ha-599798-m04 status: &{Name:ha-599798-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (55.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.518453413s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.28s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (41.66s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-599798 node add --control-plane --alsologtostderr -v 5: (40.822710844s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-599798 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.66s)
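Note: unlike the earlier worker join, --control-plane adds another control-plane member, restoring the three-member topology after the m03 deletion; sketch:
	minikube -p ha-demo node add --control-plane
	minikube -p ha-demo status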

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (37.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-792692 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1209 02:15:56.852268   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-792692 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.952927093s)
--- PASS: TestJSONOutput/start/Command (37.95s)
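Note: with --output=json, start emits one CloudEvents-style JSON object per line (the specversion / io.k8s.sigs.minikube.* schema captured verbatim in TestErrorJSONOutput below); assuming jq is installed, the emitted event types can be inspected with a sketch like:
	minikube start -p json-demo --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio | jq -r .type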

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-792692 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-792692 --output=json --user=testUser: (6.040353993s)
--- PASS: TestJSONOutput/stop/Command (6.04s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-629488 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-629488 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.981345ms)

-- stdout --
	{"specversion":"1.0","id":"a617b10e-7f5b-4543-852e-fb62172dd0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-629488] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b5e3ca6-22f1-4381-907b-cf5989281a7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"b9ad7a48-f260-42d7-83f2-7d3c71a311d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e8329e43-d36c-4eee-9f2b-eb3e31636362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig"}}
	{"specversion":"1.0","id":"712572c7-49d3-42c4-a110-d7b3d427fd3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube"}}
	{"specversion":"1.0","id":"ac0a0936-a2c7-45d0-8048-393d298d7b5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aad1d9c6-75fa-4be6-93d2-da2a88a69ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"160a2584-7e4e-4250-9eaa-c8b4ec423a1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-629488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-629488
--- PASS: TestErrorJSONOutput (0.22s)
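
Each line that --output=json prints in the stdout above is a CloudEvents-style JSON envelope with specversion, id, source, type, and a data payload whose fields (message, currentstep, totalsteps, exitcode, ...) are all strings. A minimal Go sketch for consuming such a stream follows; it is inferred from the output above, not taken from minikube's source, and the cloudEvent struct is ours.

    // Decode minikube's --output=json event stream line by line.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // cloudEvent mirrors the fields visible in the stdout above.
    type cloudEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // e.g. pipe `minikube start --output=json` into this program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev cloudEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip non-event lines
            }
            // io.k8s.sigs.minikube.error events carry data.exitcode, as above.
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }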

TestKicCustomNetwork/create_custom_network (29.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-384020 --network=
E1209 02:16:24.554530   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-384020 --network=: (27.678576099s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-384020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-384020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-384020: (2.091742447s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.79s)

TestKicCustomNetwork/use_default_bridge_network (21.01s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-230856 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-230856 --network=bridge: (19.020218994s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-230856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-230856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-230856: (1.970135693s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.01s)

TestKicExistingNetwork (23.25s)

=== RUN   TestKicExistingNetwork
I1209 02:17:14.176970   14552 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 02:17:14.192386   14552 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 02:17:14.192449   14552 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 02:17:14.192465   14552 cli_runner.go:164] Run: docker network inspect existing-network
W1209 02:17:14.208116   14552 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 02:17:14.208139   14552 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1209 02:17:14.208159   14552 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1209 02:17:14.208301   14552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 02:17:14.224294   14552 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7c7eef89e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:09:73:f8:8d:c9} reservation:<nil>}
I1209 02:17:14.224671   14552 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe5440}
I1209 02:17:14.224703   14552 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 02:17:14.224749   14552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 02:17:14.269390   14552 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-411311 --network=existing-network
E1209 02:17:18.694881   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-411311 --network=existing-network: (21.155932295s)
helpers_test.go:175: Cleaning up "existing-network-411311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-411311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-411311: (1.967214325s)
I1209 02:17:37.408805   14552 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.25s)
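
The log above shows the free-subnet probe: 192.168.49.0/24 is skipped because interface br-f7c7eef89e03 already owns 192.168.49.1, and 192.168.58.0/24 is chosen instead. A rough Go sketch of that idea (an illustration only, not minikube's network.go; the candidate list and helper are ours):

    // Pick the first private /24 whose gateway is not already bound locally.
    package main

    import (
        "fmt"
        "net"
    )

    func gatewayInUse(gw net.IP) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative if we cannot inspect interfaces
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(gw) {
                return true
            }
        }
        return false
    }

    func main() {
        for _, octet := range []byte{49, 58, 67, 76} { // 192.168.<octet>.0/24
            gw := net.IPv4(192, 168, octet, 1)
            if gatewayInUse(gw) {
                fmt.Printf("skipping subnet 192.168.%d.0/24: gateway %s is taken\n", octet, gw)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
            return
        }
    }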

TestKicCustomSubnet (23.6s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-416402 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-416402 --subnet=192.168.60.0/24: (21.469525042s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-416402 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-416402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-416402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-416402: (2.111012311s)
--- PASS: TestKicCustomSubnet (23.60s)

TestKicStaticIP (25.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-709697 --static-ip=192.168.200.200
E1209 02:18:06.556773   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-709697 --static-ip=192.168.200.200: (23.461987781s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-709697 ip
helpers_test.go:175: Cleaning up "static-ip-709697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-709697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-709697: (2.089416728s)
--- PASS: TestKicStaticIP (25.70s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (44.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-801168 --driver=docker  --container-runtime=crio
E1209 02:18:41.760659   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-801168 --driver=docker  --container-runtime=crio: (20.412419531s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-803944 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-803944 --driver=docker  --container-runtime=crio: (18.738561492s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-801168
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-803944
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-803944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-803944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-803944: (2.293665949s)
helpers_test.go:175: Cleaning up "first-801168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-801168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-801168: (2.312805745s)
--- PASS: TestMinikubeProfile (44.93s)

TestMountStart/serial/StartWithMountFirst (4.48s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-286791 --memory=3072 --mount-string /tmp/TestMountStartserial3670907855/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-286791 --memory=3072 --mount-string /tmp/TestMountStartserial3670907855/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.478300273s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.48s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-286791 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-300531 --memory=3072 --mount-string /tmp/TestMountStartserial3670907855/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-300531 --memory=3072 --mount-string /tmp/TestMountStartserial3670907855/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.493030844s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.49s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300531 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-286791 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-286791 --alsologtostderr -v=5: (1.658012088s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300531 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-300531
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-300531: (1.243873433s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-300531
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-300531: (6.121088348s)
--- PASS: TestMountStart/serial/RestartStopped (7.12s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300531 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (93.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175236 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 02:20:56.851806   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175236 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.735496777s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.20s)

TestMultiNode/serial/DeployApp2Nodes (3.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-175236 -- rollout status deployment/busybox: (1.859428349s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-jxrsj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-wqbr9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-jxrsj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-wqbr9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-jxrsj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-wqbr9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.25s)

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-jxrsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-jxrsj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-wqbr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175236 -- exec busybox-7b57f96db7-wqbr9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
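
The shell pipeline in this test — nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 — takes the fifth line of busybox's nslookup output and its third space-separated field, which is the resolved host IP the pods then ping (192.168.67.1 here). A Go equivalent of that extraction, with a sample shaped like typical busybox output (the exact nslookup layout varies by busybox version, so treat the sample as illustrative):

    // fifthLineThirdField mimics `awk 'NR==5' | cut -d' ' -f3`.
    package main

    import (
        "fmt"
        "strings"
    )

    func fifthLineThirdField(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ") // NR==5 -> index 4
        if len(fields) < 3 {
            return "" // too few fields on that line
        }
        return fields[2] // -f3 -> index 2
    }

    func main() {
        sample := "Server:\t\t10.96.0.10\n" +
            "Address:\t10.96.0.10:53\n" +
            "\n" +
            "Name:\thost.minikube.internal\n" +
            "Address 1: 192.168.67.1 host.minikube.internal\n"
        fmt.Println(fifthLineThirdField(sample)) // prints 192.168.67.1
    }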

TestMultiNode/serial/AddNode (55.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-175236 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-175236 -v=5 --alsologtostderr: (55.039437791s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.64s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-175236 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (9.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp testdata/cp-test.txt multinode-175236:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3776167190/001/cp-test_multinode-175236.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236:/home/docker/cp-test.txt multinode-175236-m02:/home/docker/cp-test_multinode-175236_multinode-175236-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test_multinode-175236_multinode-175236-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236:/home/docker/cp-test.txt multinode-175236-m03:/home/docker/cp-test_multinode-175236_multinode-175236-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test_multinode-175236_multinode-175236-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp testdata/cp-test.txt multinode-175236-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3776167190/001/cp-test_multinode-175236-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m02:/home/docker/cp-test.txt multinode-175236:/home/docker/cp-test_multinode-175236-m02_multinode-175236.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test_multinode-175236-m02_multinode-175236.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m02:/home/docker/cp-test.txt multinode-175236-m03:/home/docker/cp-test_multinode-175236-m02_multinode-175236-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test_multinode-175236-m02_multinode-175236-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp testdata/cp-test.txt multinode-175236-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3776167190/001/cp-test_multinode-175236-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m03:/home/docker/cp-test.txt multinode-175236:/home/docker/cp-test_multinode-175236-m03_multinode-175236.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236 "sudo cat /home/docker/cp-test_multinode-175236-m03_multinode-175236.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 cp multinode-175236-m03:/home/docker/cp-test.txt multinode-175236-m02:/home/docker/cp-test_multinode-175236-m03_multinode-175236-m02.txt
E1209 02:22:18.694368   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 ssh -n multinode-175236-m02 "sudo cat /home/docker/cp-test_multinode-175236-m03_multinode-175236-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.30s)
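
Every step in this test follows the same two-command pattern: minikube cp to place a file on a node (or pull it off, or move it node-to-node), then minikube ssh -n <node> "sudo cat ..." to read it back and compare. A compact Go sketch of that verify loop, using the binary path, profile, and node names from the log (the helper itself is ours, not the test's code):

    // Copy a file onto a node with `minikube cp`, then read it back over ssh.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const bin = "out/minikube-linux-amd64"

    func cpAndVerify(profile, src, node, dst string, want []byte) error {
        if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v: %s", err, out)
        }
        got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh cat failed: %v: %s", err, got)
        }
        if string(got) != string(want) {
            return fmt.Errorf("content mismatch on %s:%s", node, dst)
        }
        return nil
    }

    func main() {
        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            panic(err)
        }
        err = cpAndVerify("multinode-175236", "testdata/cp-test.txt",
            "multinode-175236-m02", "/home/docker/cp-test.txt", want)
        fmt.Println(err)
    }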

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-175236 node stop m03: (1.243494104s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175236 status: exit status 7 (476.140673ms)

-- stdout --
	multinode-175236
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175236-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175236-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr: exit status 7 (467.840216ms)

-- stdout --
	multinode-175236
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175236-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175236-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 02:22:21.070430  169513 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:22:21.070531  169513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:22:21.070539  169513 out.go:374] Setting ErrFile to fd 2...
	I1209 02:22:21.070543  169513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:22:21.070755  169513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:22:21.070919  169513 out.go:368] Setting JSON to false
	I1209 02:22:21.070943  169513 mustload.go:66] Loading cluster: multinode-175236
	I1209 02:22:21.071006  169513 notify.go:221] Checking for updates...
	I1209 02:22:21.071427  169513 config.go:182] Loaded profile config "multinode-175236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:22:21.071446  169513 status.go:174] checking status of multinode-175236 ...
	I1209 02:22:21.072081  169513 cli_runner.go:164] Run: docker container inspect multinode-175236 --format={{.State.Status}}
	I1209 02:22:21.090936  169513 status.go:371] multinode-175236 host status = "Running" (err=<nil>)
	I1209 02:22:21.090956  169513 host.go:66] Checking if "multinode-175236" exists ...
	I1209 02:22:21.091254  169513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175236
	I1209 02:22:21.107943  169513 host.go:66] Checking if "multinode-175236" exists ...
	I1209 02:22:21.108228  169513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:22:21.108275  169513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175236
	I1209 02:22:21.124347  169513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/multinode-175236/id_rsa Username:docker}
	I1209 02:22:21.213303  169513 ssh_runner.go:195] Run: systemctl --version
	I1209 02:22:21.219321  169513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:22:21.230697  169513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:22:21.285986  169513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-09 02:22:21.276378906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:22:21.286610  169513 kubeconfig.go:125] found "multinode-175236" server: "https://192.168.67.2:8443"
	I1209 02:22:21.286654  169513 api_server.go:166] Checking apiserver status ...
	I1209 02:22:21.286709  169513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:22:21.298071  169513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1209 02:22:21.305992  169513 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:22:21.306041  169513 ssh_runner.go:195] Run: ls
	I1209 02:22:21.309442  169513 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 02:22:21.313425  169513 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 02:22:21.313442  169513 status.go:463] multinode-175236 apiserver status = Running (err=<nil>)
	I1209 02:22:21.313450  169513 status.go:176] multinode-175236 status: &{Name:multinode-175236 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:22:21.313466  169513 status.go:174] checking status of multinode-175236-m02 ...
	I1209 02:22:21.313701  169513 cli_runner.go:164] Run: docker container inspect multinode-175236-m02 --format={{.State.Status}}
	I1209 02:22:21.331157  169513 status.go:371] multinode-175236-m02 host status = "Running" (err=<nil>)
	I1209 02:22:21.331174  169513 host.go:66] Checking if "multinode-175236-m02" exists ...
	I1209 02:22:21.331434  169513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175236-m02
	I1209 02:22:21.348714  169513 host.go:66] Checking if "multinode-175236-m02" exists ...
	I1209 02:22:21.348992  169513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:22:21.349035  169513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175236-m02
	I1209 02:22:21.365556  169513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22081-11001/.minikube/machines/multinode-175236-m02/id_rsa Username:docker}
	I1209 02:22:21.454115  169513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:22:21.465580  169513 status.go:176] multinode-175236-m02 status: &{Name:multinode-175236-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:22:21.465615  169513 status.go:174] checking status of multinode-175236-m03 ...
	I1209 02:22:21.465876  169513 cli_runner.go:164] Run: docker container inspect multinode-175236-m03 --format={{.State.Status}}
	I1209 02:22:21.482838  169513 status.go:371] multinode-175236-m03 host status = "Stopped" (err=<nil>)
	I1209 02:22:21.482865  169513 status.go:384] host is not running, skipping remaining checks
	I1209 02:22:21.482872  169513 status.go:176] multinode-175236-m03 status: &{Name:multinode-175236-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)

TestMultiNode/serial/StartAfterStop (6.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-175236 node start m03 -v=5 --alsologtostderr: (6.280027046s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.95s)

TestMultiNode/serial/RestartKeepsNodes (79.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175236
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-175236
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-175236: (31.347712196s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175236 --wait=true -v=5 --alsologtostderr
E1209 02:23:06.552166   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175236 --wait=true -v=5 --alsologtostderr: (47.907267418s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175236
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.38s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-175236 node delete m03: (4.574180512s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (30.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-175236 stop: (30.039859593s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175236 status: exit status 7 (93.147723ms)

-- stdout --
	multinode-175236
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175236-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr: exit status 7 (91.537025ms)

-- stdout --
	multinode-175236
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175236-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 02:24:23.143835  179367 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:24:23.143950  179367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:24:23.143958  179367 out.go:374] Setting ErrFile to fd 2...
	I1209 02:24:23.143962  179367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:24:23.144131  179367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:24:23.144283  179367 out.go:368] Setting JSON to false
	I1209 02:24:23.144305  179367 mustload.go:66] Loading cluster: multinode-175236
	I1209 02:24:23.144428  179367 notify.go:221] Checking for updates...
	I1209 02:24:23.144693  179367 config.go:182] Loaded profile config "multinode-175236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:24:23.144709  179367 status.go:174] checking status of multinode-175236 ...
	I1209 02:24:23.145205  179367 cli_runner.go:164] Run: docker container inspect multinode-175236 --format={{.State.Status}}
	I1209 02:24:23.163078  179367 status.go:371] multinode-175236 host status = "Stopped" (err=<nil>)
	I1209 02:24:23.163097  179367 status.go:384] host is not running, skipping remaining checks
	I1209 02:24:23.163102  179367 status.go:176] multinode-175236 status: &{Name:multinode-175236 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:24:23.163120  179367 status.go:174] checking status of multinode-175236-m02 ...
	I1209 02:24:23.163372  179367 cli_runner.go:164] Run: docker container inspect multinode-175236-m02 --format={{.State.Status}}
	I1209 02:24:23.179862  179367 status.go:371] multinode-175236-m02 host status = "Stopped" (err=<nil>)
	I1209 02:24:23.179877  179367 status.go:384] host is not running, skipping remaining checks
	I1209 02:24:23.179882  179367 status.go:176] multinode-175236-m02 status: &{Name:multinode-175236-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.22s)

TestMultiNode/serial/RestartMultiNode (46.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175236 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 02:24:29.617368   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175236 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.345170427s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175236 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.91s)

TestMultiNode/serial/ValidateNameConflict (21.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175236
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175236-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-175236-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.763949ms)

-- stdout --
	* [multinode-175236-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-175236-m02' is duplicated with machine name 'multinode-175236-m02' in profile 'multinode-175236'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175236-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175236-m03 --driver=docker  --container-runtime=crio: (19.212790859s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-175236
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-175236: exit status 80 (274.063115ms)

-- stdout --
	* Adding node m03 to cluster multinode-175236 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-175236-m03 already exists in multinode-175236-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-175236-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-175236-m03: (2.289732588s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.91s)
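
Both failures above are deliberate: exit status 14 (MK_USAGE) because the requested profile name collides with an existing machine name inside profile multinode-175236, and exit status 80 (GUEST_NODE_ADD) because the node name the add would generate is already taken by a standalone profile. A small Go sketch of the first check, the profile-vs-machine-name collision (illustrative only; the <profile>-mNN naming pattern is taken from the log, the helpers are ours):

    // Reject a new profile name that matches any machine name of an
    // existing multi-node profile (e.g. multinode-175236-m02).
    package main

    import (
        "fmt"
        "strings"
    )

    func machinesFor(profile string, nodes int) []string {
        names := []string{profile}
        for i := 2; i <= nodes; i++ {
            names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
        }
        return names
    }

    func validateProfileName(newName string, existing map[string]int) error {
        for profile, nodes := range existing {
            for _, m := range machinesFor(profile, nodes) {
                if strings.EqualFold(newName, m) {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
                        newName, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        existing := map[string]int{"multinode-175236": 2}
        fmt.Println(validateProfileName("multinode-175236-m02", existing))
    }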

TestPreload (106.47s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-066238 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1209 02:25:56.852208   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-066238 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (47.835698402s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-066238 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-066238
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-066238: (7.915323279s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-066238 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1209 02:27:18.695332   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:27:19.916715   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-066238 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.378756814s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-066238 image list
helpers_test.go:175: Cleaning up "test-preload-066238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-066238
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-066238: (2.35143723s)
--- PASS: TestPreload (106.47s)

TestScheduledStopUnix (93.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-155628 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-155628 --memory=3072 --driver=docker  --container-runtime=crio: (18.096169631s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155628 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1209 02:27:40.767892  196516 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:27:40.768124  196516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:27:40.768132  196516 out.go:374] Setting ErrFile to fd 2...
	I1209 02:27:40.768137  196516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:27:40.768345  196516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:27:40.768553  196516 out.go:368] Setting JSON to false
	I1209 02:27:40.768648  196516 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:27:40.768952  196516 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:27:40.769021  196516 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/config.json ...
	I1209 02:27:40.769175  196516 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:27:40.769264  196516 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-155628 -n scheduled-stop-155628
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1209 02:27:41.149765  196667 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:27:41.149855  196667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:27:41.149859  196667 out.go:374] Setting ErrFile to fd 2...
	I1209 02:27:41.149863  196667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:27:41.150054  196667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:27:41.150289  196667 out.go:368] Setting JSON to false
	I1209 02:27:41.150451  196667 daemonize_unix.go:73] killing process 196551 as it is an old scheduled stop
	I1209 02:27:41.150562  196667 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:27:41.150874  196667 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:27:41.150939  196667 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/config.json ...
	I1209 02:27:41.151118  196667 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:27:41.151214  196667 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1209 02:27:41.157351   14552 retry.go:31] will retry after 81.507µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.158491   14552 retry.go:31] will retry after 147.205µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.159664   14552 retry.go:31] will retry after 193.466µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.160797   14552 retry.go:31] will retry after 270.152µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.161918   14552 retry.go:31] will retry after 389.115µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.163024   14552 retry.go:31] will retry after 465.979µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.164148   14552 retry.go:31] will retry after 626.068µs: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.165273   14552 retry.go:31] will retry after 2.260288ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.168457   14552 retry.go:31] will retry after 2.037735ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.170586   14552 retry.go:31] will retry after 3.95069ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.174774   14552 retry.go:31] will retry after 5.044626ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.179902   14552 retry.go:31] will retry after 7.854813ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.188116   14552 retry.go:31] will retry after 11.676592ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.200399   14552 retry.go:31] will retry after 15.198754ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.216560   14552 retry.go:31] will retry after 21.696587ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
I1209 02:27:41.238771   14552 retry.go:31] will retry after 28.194568ms: open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155628 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155628 -n scheduled-stop-155628
E1209 02:28:06.552703   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-155628
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155628 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1209 02:28:06.998933  197233 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:28:06.999033  197233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:28:06.999037  197233 out.go:374] Setting ErrFile to fd 2...
	I1209 02:28:06.999041  197233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:28:06.999255  197233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:28:06.999460  197233 out.go:368] Setting JSON to false
	I1209 02:28:06.999528  197233 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:28:06.999815  197233 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:28:06.999890  197233 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/scheduled-stop-155628/config.json ...
	I1209 02:28:07.000077  197233 mustload.go:66] Loading cluster: scheduled-stop-155628
	I1209 02:28:07.000173  197233 config.go:182] Loaded profile config "scheduled-stop-155628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-155628
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-155628: exit status 7 (76.661012ms)

-- stdout --
	scheduled-stop-155628
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155628 -n scheduled-stop-155628
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155628 -n scheduled-stop-155628: exit status 7 (75.704692ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-155628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-155628
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-155628: (4.385211073s)
--- PASS: TestScheduledStopUnix (93.93s)
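
Note: the scheduled-stop flags exercised above work the same way interactively; each new --schedule replaces the pending one, and --cancel-scheduled clears it. A minimal sketch (profile name "sched-demo" is hypothetical):

	minikube stop -p sched-demo --schedule 5m         # arm a stop five minutes out
	minikube status -p sched-demo --format='{{.TimeToStop}}'
	minikube stop -p sched-demo --cancel-scheduled    # "All existing scheduled stops cancelled"
	minikube stop -p sched-demo --schedule 15s        # ~15s later, status reports Stopped and exits 7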

TestInsufficientStorage (8.58s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-342795 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-342795 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.164435754s)

-- stdout --
	{"specversion":"1.0","id":"63b79e61-ed91-404b-80d2-830bc468210e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-342795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"75725209-fafd-43d4-a4b7-63fb0286cc3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"2842daf1-3a10-4472-84e3-007ce6052a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6e103beb-39f6-4cd5-84be-ae8059f5e15e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig"}}
	{"specversion":"1.0","id":"aeff6319-23c7-4d36-876a-5efc02d805da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube"}}
	{"specversion":"1.0","id":"f84cfa8c-82c6-4f7e-b9f6-8ec89cce2602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"be22ae31-cdd6-4075-be82-a0ebdcb8938b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f2e9ac58-1484-490b-a115-869b22699403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5a1ee7d7-ba00-44f4-a578-a18ac9adaf9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"36336eab-806a-4cb0-8711-5cd069406a61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2da6456-62d1-4dc5-a269-4a7a86b5f4a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f668f8b3-1af9-4bf8-9f49-d10c844ab3e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-342795\" primary control-plane node in \"insufficient-storage-342795\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9504011-7bdb-4e22-81c2-295c041627f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765184860-22066 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"76cc72e2-320d-4f51-b69f-2beac92b1274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f63dde5-b132-45ae-8b00-e9ce90386896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-342795 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-342795 --output=json --layout=cluster: exit status 7 (283.305652ms)

-- stdout --
	{"Name":"insufficient-storage-342795","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-342795","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1209 02:29:02.978032  199738 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-342795" does not appear in /home/jenkins/minikube-integration/22081-11001/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-342795 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-342795 --output=json --layout=cluster: exit status 7 (273.306237ms)

-- stdout --
	{"Name":"insufficient-storage-342795","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-342795","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1209 02:29:03.252439  199849 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-342795" does not appear in /home/jenkins/minikube-integration/22081-11001/kubeconfig
	E1209 02:29:03.262406  199849 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/insufficient-storage-342795/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-342795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-342795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-342795: (1.854254862s)
--- PASS: TestInsufficientStorage (8.58s)
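
Note: MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE appear to be test-only fault-injection knobs that make /var look full; what the run above verifies is the resulting exit code 26 (RSRC_DOCKER_STORAGE) and the 507/InsufficientStorage status in JSON output. A sketch of the same check, assuming those variables behave as in this run ("storage-demo" is a hypothetical profile):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio
	echo $?                                                         # expect 26
	minikube status -p storage-demo --output=json --layout=cluster  # expect "StatusCode":507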

TestRunningBinaryUpgrade (294.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2361738389 start -p running-upgrade-099378 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2361738389 start -p running-upgrade-099378 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.502572458s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-099378 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-099378 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.630977924s)
helpers_test.go:175: Cleaning up "running-upgrade-099378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-099378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-099378: (2.590798039s)
--- PASS: TestRunningBinaryUpgrade (294.40s)
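
Note: both binary-upgrade tests in this report follow one pattern: a previously released minikube creates the cluster, then the binary under test starts the same profile in place (the stopped-binary variant below adds a stop in between). A sketch, with <old-minikube> standing in for an older released build and "upgrade-demo" a hypothetical profile:

	<old-minikube> start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
	minikube start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio   # same profile, newer binary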

TestKubernetesUpgrade (295.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.389496723s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-190944
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-190944: (2.025725608s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-190944 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-190944 status --format={{.Host}}: exit status 7 (104.441028ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.870370608s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-190944 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.927024ms)

-- stdout --
	* [kubernetes-upgrade-190944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-190944
	    minikube start -p kubernetes-upgrade-190944 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1909442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-190944 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-190944 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.246005202s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-190944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-190944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-190944: (2.479496085s)
--- PASS: TestKubernetesUpgrade (295.28s)
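
Note: the exit 106 (K8S_DOWNGRADE_UNSUPPORTED) above is deliberate: minikube upgrades an existing cluster's Kubernetes version in place but refuses to downgrade it, suggesting delete/recreate instead. The sequence, sketched with a hypothetical "k8s-demo" profile:

	minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p k8s-demo
	minikube start -p k8s-demo --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio   # upgrade succeeds
	minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio          # downgrade: exit 106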

TestMissingContainerUpgrade (89.08s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1324881280 start -p missing-upgrade-857664 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1324881280 start -p missing-upgrade-857664 --memory=3072 --driver=docker  --container-runtime=crio: (41.692092157s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-857664
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-857664: (1.778841128s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-857664
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-857664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-857664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.315496264s)
helpers_test.go:175: Cleaning up "missing-upgrade-857664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-857664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-857664: (2.45479048s)
--- PASS: TestMissingContainerUpgrade (89.08s)
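
Note: this test simulates the node container vanishing out from under a profile (docker stop followed by docker rm) and verifies that a plain start recreates it from the surviving profile config. Sketched, assuming the container is named after the profile as in this run:

	docker stop <profile> && docker rm <profile>
	minikube start -p <profile> --driver=docker --container-runtime=crio   # rebuilds the node container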

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestPause/serial/Start (57.09s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-752151 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-752151 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.087928726s)
--- PASS: TestPause/serial/Start (57.09s)

TestStoppedBinaryUpgrade/Upgrade (304.44s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.923810196 start -p stopped-upgrade-768415 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.923810196 start -p stopped-upgrade-768415 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.27874906s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.923810196 -p stopped-upgrade-768415 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.923810196 -p stopped-upgrade-768415 stop: (2.364965778s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-768415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-768415 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.795155995s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.44s)

TestPause/serial/SecondStartNoReconfiguration (5.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-752151 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-752151 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.561337315s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (81.73249ms)

-- stdout --
	* [NoKubernetes-210390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
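
Note: exit 14 (MK_USAGE) above is the intended guard: --no-kubernetes and --kubernetes-version are mutually exclusive. If the version is coming from persisted global config rather than the command line, the fix the error message suggests applies:

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio   # "nok8s-demo" is hypothetical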

TestNoKubernetes/serial/StartWithK8s (19.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210390 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210390 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.051010755s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-210390 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.37s)

TestNoKubernetes/serial/StartWithStopK8s (22.86s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.632674906s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-210390 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-210390 status -o json: exit status 2 (292.888567ms)

-- stdout --
	{"Name":"NoKubernetes-210390","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-210390
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-210390: (1.937389792s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.86s)

TestNoKubernetes/serial/Start (6.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210390 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.980695983s)
--- PASS: TestNoKubernetes/serial/Start (6.98s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22081-11001/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-210390 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-210390 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.661788ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
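
Note: the check above leans on systemctl exit codes: is-active --quiet exits 0 only for an active unit, and the status 3 seen here means kubelet is inactive, which is the pass condition for a no-Kubernetes node. The same probe by hand (profile name hypothetical):

	minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running"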

TestNoKubernetes/serial/ProfileList (30.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (29.594661844s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (30.42s)
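
Note: profile list has a human-readable form and a JSON form; the JSON form is the one to script against. A sketch, where the .valid[].Name path is an assumption about the output shape rather than something shown in this log:

	minikube profile list --output=json | jq -r '.valid[].Name'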

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-210390
E1209 02:32:18.695074   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-210390: (1.269858632s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210390 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210390 --driver=docker  --container-runtime=crio: (6.198379975s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-210390 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-210390 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.230968ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestNetworkPlugins/group/false (3.28s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-933067 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-933067 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (155.849538ms)

-- stdout --
	* [false-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1209 02:32:31.037351  248427 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:32:31.037624  248427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:32:31.037647  248427 out.go:374] Setting ErrFile to fd 2...
	I1209 02:32:31.037653  248427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:32:31.037830  248427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-11001/.minikube/bin
	I1209 02:32:31.038312  248427 out.go:368] Setting JSON to false
	I1209 02:32:31.039416  248427 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4500,"bootTime":1765243051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:32:31.039472  248427 start.go:143] virtualization: kvm guest
	I1209 02:32:31.041292  248427 out.go:179] * [false-933067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:32:31.042411  248427 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:32:31.042404  248427 notify.go:221] Checking for updates...
	I1209 02:32:31.044562  248427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:32:31.045816  248427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-11001/kubeconfig
	I1209 02:32:31.046860  248427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-11001/.minikube
	I1209 02:32:31.047915  248427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:32:31.048910  248427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:32:31.050297  248427 config.go:182] Loaded profile config "kubernetes-upgrade-190944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:32:31.050383  248427 config.go:182] Loaded profile config "running-upgrade-099378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:32:31.050474  248427 config.go:182] Loaded profile config "stopped-upgrade-768415": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 02:32:31.050547  248427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:32:31.072751  248427 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1209 02:32:31.072824  248427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 02:32:31.128306  248427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-09 02:32:31.118447154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 02:32:31.128453  248427 docker.go:319] overlay module found
	I1209 02:32:31.130422  248427 out.go:179] * Using the docker driver based on user configuration
	I1209 02:32:31.131504  248427 start.go:309] selected driver: docker
	I1209 02:32:31.131520  248427 start.go:927] validating driver "docker" against <nil>
	I1209 02:32:31.131535  248427 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:32:31.133252  248427 out.go:203] 
	W1209 02:32:31.134337  248427 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1209 02:32:31.135406  248427 out.go:203] 

** /stderr **
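
Note: the MK_USAGE failure above reflects a real constraint rather than a bug: cri-o brings no pod network of its own, so minikube rejects --cni=false whenever --container-runtime=crio is set. A working variant pins an explicit CNI instead; bridge is used here only as an example of an accepted --cni value ("cni-demo" is hypothetical):

	minikube start -p cni-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio
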
net_test.go:88: 
----------------------- debugLogs start: false-933067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-933067

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-933067

>>> host: /etc/nsswitch.conf:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/hosts:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/resolv.conf:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-933067

>>> host: crictl pods:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: crictl containers:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> k8s: describe netcat deployment:
error: context "false-933067" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-933067" does not exist

>>> k8s: netcat logs:
error: context "false-933067" does not exist

>>> k8s: describe coredns deployment:
error: context "false-933067" does not exist

>>> k8s: describe coredns pods:
error: context "false-933067" does not exist

>>> k8s: coredns logs:
error: context "false-933067" does not exist

>>> k8s: describe api server pod(s):
error: context "false-933067" does not exist

>>> k8s: api server logs:
error: context "false-933067" does not exist

>>> host: /etc/cni:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: ip a s:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: ip r s:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: iptables-save:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: iptables table nat:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> k8s: describe kube-proxy daemon set:
error: context "false-933067" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-933067" does not exist

>>> k8s: kube-proxy logs:
error: context "false-933067" does not exist

>>> host: kubelet daemon status:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: kubelet daemon config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> k8s: kubelet logs:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-190944
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-099378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:29:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-768415
contexts:
- context:
    cluster: kubernetes-upgrade-190944
    user: kubernetes-upgrade-190944
  name: kubernetes-upgrade-190944
- context:
    cluster: running-upgrade-099378
    user: running-upgrade-099378
  name: running-upgrade-099378
- context:
    cluster: stopped-upgrade-768415
    user: stopped-upgrade-768415
  name: stopped-upgrade-768415
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-190944
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.key
- name: running-upgrade-099378
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.key
- name: stopped-upgrade-768415
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-933067

>>> host: docker daemon status:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: docker daemon config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/docker/daemon.json:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: docker system info:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: cri-docker daemon status:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: cri-docker daemon config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: cri-dockerd version:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: containerd daemon status:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: containerd daemon config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/containerd/config.toml:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: containerd config dump:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: crio daemon status:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: crio daemon config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: /etc/crio:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

>>> host: crio config:
* Profile "false-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-933067"

----------------------- debugLogs end: false-933067 [took: 2.943087831s] --------------------------------
helpers_test.go:175: Cleaning up "false-933067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-933067
--- PASS: TestNetworkPlugins/group/false (3.28s)
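
Context on the probes above: the "false" network-plugin case never brings up a cluster, so no "false-933067" minikube profile or kubeconfig context is ever created; every kubectl probe fails with "context not found" and every host probe with "profile not found", and the test still passes because that is the expected outcome. A minimal sketch of the same check, assuming the jenkins kubeconfig dumped above:

    kubectl config get-contexts
    kubectl --context false-933067 get pods -A   # expected to fail: context "false-933067" does not exist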

TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-768415
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-768415: (1.304673908s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

TestStartStop/group/old-k8s-version/serial/FirstStart (50.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.41708394s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.42s)
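
The FirstStart invocation above exercises a full cold start of a v1.28.0 cluster on the docker driver with crio. Stripped to its load-bearing flags, an equivalent manual run would look like the sketch below (profile name, memory, and versions taken from the log; everything else left at minikube defaults):

    out/minikube-linux-amd64 start -p old-k8s-version-126117 --memory=3072 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0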

TestStartStop/group/no-preload/serial/FirstStart (49.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (49.826294094s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.83s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1209 02:35:21.762593   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/addons-598284/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.511069523s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.51s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-126117 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [32ad79f5-6d8a-4a14-aefb-defd3600eb69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [32ad79f5-6d8a-4a14-aefb-defd3600eb69] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003900989s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-126117 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)
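
DeployApp applies testdata/busybox.yaml, waits for the integration-test=busybox pod to go Ready, then reads the container's open-file-descriptor limit. The same two steps by hand (context name from the log):

    kubectl --context old-k8s-version-126117 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-126117 exec busybox -- /bin/sh -c "ulimit -n"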

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ab74c108-2004-4878-a264-225156656ac5] Pending
helpers_test.go:352: "busybox" [ab74c108-2004-4878-a264-225156656ac5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ab74c108-2004-4878-a264-225156656ac5] Running
E1209 02:35:56.851654   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-497139/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003995481s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

TestStartStop/group/no-preload/serial/DeployApp (8.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-185074 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e17362a9-2cc3-4357-81a8-d1ec477fcb7f] Pending
helpers_test.go:352: "busybox" [e17362a9-2cc3-4357-81a8-d1ec477fcb7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e17362a9-2cc3-4357-81a8-d1ec477fcb7f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00357074s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-185074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.21s)

TestStartStop/group/old-k8s-version/serial/Stop (16.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-126117 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-126117 --alsologtostderr -v=3: (16.064657712s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-512414 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-512414 --alsologtostderr -v=3: (18.114890215s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.11s)

TestStartStop/group/newest-cni/serial/FirstStart (21.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (21.278672541s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (21.28s)
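
Because this profile passes --network-plugin=cni with only a pod-network CIDR (10.42.0.0/16) and no CNI manifest, workload pods cannot schedule until a CNI is installed, so the start waits only on the components named in --wait=apiserver,system_pods,default_sa. That is also why the UserAppExistsAfterStop and AddonExistsAfterStop steps further down are skipped with a warning. A quick way to see the effect (assuming the context name matches the profile, as minikube normally arranges):

    kubectl --context newest-cni-828614 get nodes   # typically NotReady until a CNI DaemonSet is applied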

TestStartStop/group/no-preload/serial/Stop (18.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-185074 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-185074 --alsologtostderr -v=3: (18.214798725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117: exit status 7 (74.644762ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-126117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
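
minikube status encodes component state in its exit code; as I read minikube's status flags, 7 is the combination of host, apiserver, and kubelet all reporting stopped, which the test explicitly tolerates ("may be ok") before confirming that addons can still be enabled against a stopped cluster. Reproduced by hand:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117   # prints "Stopped", exits 7
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-126117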

TestStartStop/group/old-k8s-version/serial/SecondStart (46.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-126117 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.933097966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-126117 -n old-k8s-version-126117
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414: exit status 7 (80.32324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-512414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-512414 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (47.089213146s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-512414 -n default-k8s-diff-port-512414
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074: exit status 7 (129.738019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-185074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (46.72s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-185074 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.391490114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185074 -n no-preload-185074
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.72s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (14.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-828614 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-828614 --alsologtostderr -v=3: (14.8993414s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.90s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614: exit status 7 (110.832432ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-828614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (10.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-828614 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (9.797298775s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828614 -n newest-cni-828614
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-828614 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
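
VerifyKubernetesImages lists every image in the container runtime and flags anything outside the expected Kubernetes set; kindest/kindnetd shows up here presumably because kindnet is minikube's bundled CNI for this driver/runtime combination. Reproduced by hand:

    out/minikube-linux-amd64 -p newest-cni-828614 image list --format=json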

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5rc6b" [4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00362177s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/FirstStart (39.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (39.41083542s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.41s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ktttw" [a2d8f564-44f9-4bad-8be1-7ea025ad2cf4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00354084s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5rc6b" [4c6cd675-cc90-4ada-a2b0-7f4c03ef7b3a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003376975s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-126117 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
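
AddonExistsAfterStop re-verifies the dashboard addon after the restart: wait for the kubernetes-dashboard pod by label, then describe the metrics-scraper deployment. Equivalent commands, assuming the same context:

    kubectl --context old-k8s-version-126117 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-126117 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper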

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-kvvqg" [dd170717-d670-4d29-8af4-26119ab1028d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003061228s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ktttw" [a2d8f564-44f9-4bad-8be1-7ea025ad2cf4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005471294s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-512414 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-126117 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-512414 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-kvvqg" [dd170717-d670-4d29-8af4-26119ab1028d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003994473s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-185074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-185074 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestNetworkPlugins/group/auto/Start (42.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.727927649s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.73s)

TestNetworkPlugins/group/kindnet/Start (41.05s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.049820769s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.05s)

TestNetworkPlugins/group/calico/Start (51.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.637372023s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.64s)

TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-485234 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a3353aeb-70fb-463b-850d-43e0507d25ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a3353aeb-70fb-463b-850d-43e0507d25ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003245664s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-485234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

TestStartStop/group/embed-certs/serial/Stop (19.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-485234 --alsologtostderr -v=3
E1209 02:38:06.551797   14552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/functional-976894/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-485234 --alsologtostderr -v=3: (19.673168634s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (19.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-933067 "pgrep -a kubelet"
I1209 02:38:09.237907   14552 config.go:182] Loaded profile config "auto-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
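
KubeletFlags inspects the running kubelet's command line over SSH; pgrep -a prints the PID plus the full argv, which is what the test asserts on:

    out/minikube-linux-amd64 ssh -p auto-933067 "pgrep -a kubelet"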

TestNetworkPlugins/group/auto/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5jbt5" [369acdf9-e3c3-46cf-8a0e-4f1eecc15459] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5jbt5" [369acdf9-e3c3-46cf-8a0e-4f1eecc15459] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003285167s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)
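
NetCatPod force-replaces the netcat deployment and waits for its pod to go Running; that pod then serves as the probe target for the DNS, Localhost, and HairPin checks that follow. By hand:

    kubectl --context auto-933067 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-933067 get pods -l app=netcat -w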

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mlsmc" [28f2d9f6-bae7-4332-aa93-4698be8995ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003698975s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234: exit status 7 (83.206184ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-485234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (44.85s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-485234 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.475748488s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-485234 -n embed-certs-485234
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.85s)

TestNetworkPlugins/group/auto/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)
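
The DNS check resolves the built-in kubernetes.default service name from inside the netcat pod, exercising the full CNI-to-CoreDNS path:

    kubectl --context auto-933067 exec deployment/netcat -- nslookup kubernetes.default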

TestNetworkPlugins/group/auto/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

TestNetworkPlugins/group/auto/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)
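
Localhost and HairPin both run netcat port scans from inside the pod: Localhost connects to 127.0.0.1:8080, while HairPin connects back to the pod's own service name ("netcat"), a path that only works when the CNI handles hairpin traffic correctly:

    kubectl --context auto-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"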

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-933067 "pgrep -a kubelet"
I1209 02:38:19.539154   14552 config.go:182] Loaded profile config "kindnet-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f8clq" [f203a2e4-8490-4917-9a44-a146831f29c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f8clq" [f203a2e4-8490-4917-9a44-a146831f29c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003475919s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fdcrw" [fd807549-0d11-4c00-8f7d-9a3fc1f59b84] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004311103s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
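ControllerPod waits up to 10m for pods labelled k8s-app=calico-node to report healthy. The suite polls through its own helpers; the same wait can be approximated with kubectl's built-in readiness wait (a sketch under that substitution, not the test's own polling loop):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl wait blocks until every matching pod reports Ready or the
	// timeout expires; label and namespace match the log above.
	out, err := exec.Command("kubectl", "--context", "calico-933067",
		"-n", "kube-system", "wait", "pod",
		"-l", "k8s-app=calico-node",
		"--for=condition=Ready", "--timeout=10m").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}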

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-933067 "pgrep -a kubelet"
I1209 02:38:36.198700   14552 config.go:182] Loaded profile config "calico-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmm2t" [1a194354-916c-4093-ab4a-a966761a6ebd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmm2t" [1a194354-916c-4093-ab4a-a966761a6ebd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003233621s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestNetworkPlugins/group/custom-flannel/Start (47.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.297159415s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.30s)
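Worth noting in the Start line above: --cni accepts either a built-in plugin name (as in the flannel and bridge runs below) or a path to an arbitrary CNI manifest, here testdata/kube-flannel.yaml. A sketch of driving the same start programmatically, mirroring how the suite shells out (argument list copied from the log; binary path assumes this run's layout):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "custom-flannel-933067", "--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml",
		"--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}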

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (71.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.298972701s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.30s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qgrpj" [203ce5c0-481b-4ec6-afe4-db17c646a2ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003630875s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qgrpj" [203ce5c0-481b-4ec6-afe4-db17c646a2ae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-485234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestNetworkPlugins/group/flannel/Start (51.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.05013928s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-485234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)
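VerifyKubernetesImages decodes the output of minikube's "image list --format=json" and reports images outside the stock minikube set (here kindnetd and the busybox test image). A sketch of the same decode; the repoTags field name is an assumption about the JSON shape, so check it against your minikube version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image models only the field this sketch needs; the real output
// carries more (id, digests, size, ...). Assumed shape, not verified.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"embed-certs-485234", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags)
	}
}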

                                                
                                    
TestNetworkPlugins/group/bridge/Start (32.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-933067 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (32.492820563s)
--- PASS: TestNetworkPlugins/group/bridge/Start (32.49s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-933067 "pgrep -a kubelet"
I1209 02:39:27.248753   14552 config.go:182] Loaded profile config "custom-flannel-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (7.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m7vhv" [c38bb9cb-2574-47b7-918a-7986323129f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m7vhv" [c38bb9cb-2574-47b7-918a-7986323129f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 7.003830166s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (7.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.10s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-933067 "pgrep -a kubelet"
I1209 02:39:53.782064   14552 config.go:182] Loaded profile config "bridge-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rzrct" [b4243eb6-bea5-418d-a4b3-de9738aac57f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rzrct" [b4243eb6-bea5-418d-a4b3-de9738aac57f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004442234s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-q6dsq" [b70a2074-cc17-4163-a3f1-d59d6bca39a1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003912694s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-933067 "pgrep -a kubelet"
I1209 02:40:00.850140   14552 config.go:182] Loaded profile config "enable-default-cni-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tr7vl" [6ef5fdf7-dc66-4252-952e-77770f794ff2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tr7vl" [6ef5fdf7-dc66-4252-952e-77770f794ff2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00457973s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-933067 "pgrep -a kubelet"
I1209 02:40:03.970907   14552 config.go:182] Loaded profile config "flannel-933067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-933067 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hv4hr" [67f8f60f-b382-48ea-9209-bd23b013d9fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hv4hr" [67f8f60f-b382-48ea-9209-bd23b013d9fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003069336s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-933067 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-933067 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.16
387 TestNetworkPlugins/group/kubenet 3.18
395 TestNetworkPlugins/group/cilium 3.47

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:823: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:543: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-894253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-894253
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-933067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-933067" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-190944
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-099378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:29:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-768415
contexts:
- context:
    cluster: kubernetes-upgrade-190944
    user: kubernetes-upgrade-190944
  name: kubernetes-upgrade-190944
- context:
    cluster: running-upgrade-099378
    user: running-upgrade-099378
  name: running-upgrade-099378
- context:
    cluster: stopped-upgrade-768415
    user: stopped-upgrade-768415
  name: stopped-upgrade-768415
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-190944
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.key
- name: running-upgrade-099378
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.key
- name: stopped-upgrade-768415
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.key
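The config above explains the failures throughout this debugLogs dump: current-context is empty and no kubenet-933067 context was ever created, so every kubectl probe that names it fails with "context was not found". A sketch that inspects the same kubeconfig with client-go (an illustrative assumption; the suite itself shells out to kubectl):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig kubectl would use (KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("known context:", name) // kubenet-933067 is absent
	}
}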

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-933067

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: docker daemon config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: docker system info:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: cri-docker daemon status:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: cri-docker daemon config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: cri-dockerd version:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: containerd daemon status:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: containerd daemon config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: containerd config dump:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: crio daemon status:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: crio daemon config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: /etc/crio:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

>>> host: crio config:
* Profile "kubenet-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-933067"

----------------------- debugLogs end: kubenet-933067 [took: 3.010390326s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-933067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-933067
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)

TestNetworkPlugins/group/cilium (3.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-933067 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-933067
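
None of the DNS probes above ever reach a cluster; the cilium-933067 context is missing. Against a live cluster they would amount to roughly the following (a sketch; "netcat" as the pod name is an assumption, and 10.96.0.10 is the cluster DNS address used by the probes above):

	# resolve the API service through cluster DNS, then force UDP and TCP explicitly
	kubectl --context cilium-933067 exec netcat -- nslookup kubernetes.default
	kubectl --context cilium-933067 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
	kubectl --context cilium-933067 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp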

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-933067

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-933067

>>> host: /etc/nsswitch.conf:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/hosts:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/resolv.conf:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-933067

>>> host: crictl pods:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: crictl containers:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"
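
These host-side crictl probes fail before reaching a node, since no VM or container exists for the profile. On a running profile the equivalent inspection would look roughly like this (a sketch; standard minikube and crictl commands, profile name taken from this test):

	# list CRI pod sandboxes and all containers inside the minikube node
	minikube -p cilium-933067 ssh -- sudo crictl pods
	minikube -p cilium-933067 ssh -- sudo crictl ps -a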

>>> k8s: describe netcat deployment:
error: context "cilium-933067" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-933067" does not exist

>>> k8s: netcat logs:
error: context "cilium-933067" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-933067" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-933067" does not exist

>>> k8s: coredns logs:
error: context "cilium-933067" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-933067" does not exist

>>> k8s: api server logs:
error: context "cilium-933067" does not exist

>>> host: /etc/cni:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: ip a s:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: ip r s:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: iptables-save:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: iptables table nat:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-933067

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-933067

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-933067" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-933067" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-933067

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-933067

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-933067" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-933067" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-933067" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-933067" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-933067" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: kubelet daemon config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> k8s: kubelet logs:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-190944
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:30:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-099378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-11001/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 02:29:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-768415
contexts:
- context:
    cluster: kubernetes-upgrade-190944
    user: kubernetes-upgrade-190944
  name: kubernetes-upgrade-190944
- context:
    cluster: running-upgrade-099378
    user: running-upgrade-099378
  name: running-upgrade-099378
- context:
    cluster: stopped-upgrade-768415
    user: stopped-upgrade-768415
  name: stopped-upgrade-768415
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-190944
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/kubernetes-upgrade-190944/client.key
- name: running-upgrade-099378
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/running-upgrade-099378/client.key
- name: stopped-upgrade-768415
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.crt
    client-key: /home/jenkins/minikube-integration/22081-11001/.minikube/profiles/stopped-upgrade-768415/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-933067

>>> host: docker daemon status:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: docker daemon config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: docker system info:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: cri-docker daemon status:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: cri-docker daemon config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: cri-dockerd version:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: containerd daemon status:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: containerd daemon config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: containerd config dump:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: crio daemon status:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: crio daemon config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: /etc/crio:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

>>> host: crio config:
* Profile "cilium-933067" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-933067"

----------------------- debugLogs end: cilium-933067 [took: 3.306876843s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-933067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-933067
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)
